Test Report: KVM_Linux_crio 19307

5a24b9ce483ba531c92412d298617e78cc9898c8:2024-07-19:35418

Tests failed (29/326)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 154.91
41 TestAddons/parallel/MetricsServer 315.01
54 TestAddons/StoppedEnableDisable 154.21
173 TestMultiControlPlane/serial/StopSecondaryNode 141.73
175 TestMultiControlPlane/serial/RestartSecondaryNode 58.74
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 356.98
180 TestMultiControlPlane/serial/StopCluster 141.61
240 TestMultiNode/serial/RestartKeepsNodes 323.78
242 TestMultiNode/serial/StopMultiNode 141.25
249 TestPreload 179.1
257 TestKubernetesUpgrade 418.49
293 TestStartStop/group/old-k8s-version/serial/FirstStart 271.36
309 TestStartStop/group/no-preload/serial/Stop 139.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.15
315 TestStartStop/group/embed-certs/serial/Stop 139.14
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
318 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 98.97
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
326 TestStartStop/group/old-k8s-version/serial/SecondStart 727.68
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.1
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.14
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.11
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.33
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 432.79
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 430.03
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 312.4
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 157.09
TestAddons/parallel/Ingress (154.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-990895 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-990895 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-990895 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [65ee0f77-c5d6-40a1-9307-10d20caec993] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [65ee0f77-c5d6-40a1-9307-10d20caec993] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003177197s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-990895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.126132369s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
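The curl run over minikube ssh came back with status 28, which is curl's "operation timed out" code, suggesting the ingress controller never answered the request on 127.0.0.1:80 inside the VM. Below is a minimal Go sketch of the same check, issued from the host against the node IP that `minikube ip` reports later in this log (192.168.39.239); the target IP and the 10-second timeout are assumptions for manual debugging, not part of the test.

// probe_ingress.go: hedged reproduction of the check at addons_test.go:264,
// sent from the host to the node IP instead of 127.0.0.1 inside the VM.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from this run's `minikube ip` output; adjust for other runs.
	req, err := http.NewRequest("GET", "http://192.168.39.239/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress rule from testdata/nginx-ingress-v1.yaml routes on this host header.
	req.Host = "nginx.example.com"

	// The 10s timeout is an assumption; curl's exit status 28 is the analogous timeout failure.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s, %d bytes\n", resp.Status, len(body))
}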
addons_test.go:288: (dbg) Run:  kubectl --context addons-990895 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.239
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 addons disable ingress-dns --alsologtostderr -v=1: (1.285268449s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 addons disable ingress --alsologtostderr -v=1: (7.653592421s)
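The ingress-dns half of the test (addons_test.go:299) passed: nslookup resolved hello-john.test directly against the node IP. A hedged Go equivalent pins a resolver to the node's port 53 (assuming the ingress-dns addon answers DNS there, as the passing nslookup step implies):

// resolve_ingress_dns.go: hedged equivalent of `nslookup hello-john.test 192.168.39.239`
// from addons_test.go:299, using a Go resolver pinned to the node IP.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Assumes the ingress-dns addon is listening on port 53 of the node IP.
			return d.DialContext(ctx, network, "192.168.39.239:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}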
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-990895 -n addons-990895
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 logs -n 25: (1.193215435s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-111401                                                                     | download-only-111401 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-928056                                                                     | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-106215                                                                     | download-only-106215 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-111401                                                                     | download-only-111401 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-679369 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | binary-mirror-679369                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37635                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-679369                                                                     | binary-mirror-679369 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| addons  | enable dashboard -p                                                                         | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-990895 --wait=true                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | -p addons-990895                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | -p addons-990895                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| ip      | addons-990895 ip                                                                            | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-990895 ssh cat                                                                       | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | /opt/local-path-provisioner/pvc-31780053-8619-4414-948c-908236bd874e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC | 19 Jul 24 18:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-990895 ssh curl -s                                                                   | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-990895 addons                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC | 19 Jul 24 18:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990895 addons                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:19 UTC | 19 Jul 24 18:19 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-990895 ip                                                                            | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:15:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:15:12.655117   23130 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:15:12.655349   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:15:12.655357   23130 out.go:304] Setting ErrFile to fd 2...
	I0719 18:15:12.655362   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:15:12.655529   23130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:15:12.656112   23130 out.go:298] Setting JSON to false
	I0719 18:15:12.656903   23130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3456,"bootTime":1721409457,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:15:12.656956   23130 start.go:139] virtualization: kvm guest
	I0719 18:15:12.659320   23130 out.go:177] * [addons-990895] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:15:12.660799   23130 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:15:12.660802   23130 notify.go:220] Checking for updates...
	I0719 18:15:12.662279   23130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:15:12.663778   23130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:15:12.664953   23130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:12.666090   23130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:15:12.667225   23130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:15:12.668541   23130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:15:12.699022   23130 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 18:15:12.700219   23130 start.go:297] selected driver: kvm2
	I0719 18:15:12.700237   23130 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:15:12.700248   23130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:15:12.700929   23130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:15:12.700989   23130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:15:12.715208   23130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:15:12.715252   23130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:15:12.715471   23130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:15:12.715500   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:12.715507   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:12.715517   23130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 18:15:12.715569   23130 start.go:340] cluster config:
	{Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:15:12.715660   23130 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:15:12.718125   23130 out.go:177] * Starting "addons-990895" primary control-plane node in "addons-990895" cluster
	I0719 18:15:12.719313   23130 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:15:12.719352   23130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:15:12.719359   23130 cache.go:56] Caching tarball of preloaded images
	I0719 18:15:12.719441   23130 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:15:12.719451   23130 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:15:12.719725   23130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json ...
	I0719 18:15:12.719743   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json: {Name:mkcd6448880e86d1c1f3f5ac7ee8b02df9aea5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:12.719899   23130 start.go:360] acquireMachinesLock for addons-990895: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:15:12.719950   23130 start.go:364] duration metric: took 36µs to acquireMachinesLock for "addons-990895"
	I0719 18:15:12.719968   23130 start.go:93] Provisioning new machine with config: &{Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:15:12.720020   23130 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 18:15:12.722121   23130 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 18:15:12.722228   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:15:12.722260   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:15:12.736035   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0719 18:15:12.736509   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:15:12.737112   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:15:12.737135   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:15:12.737507   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:15:12.737676   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:12.737824   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:12.737972   23130 start.go:159] libmachine.API.Create for "addons-990895" (driver="kvm2")
	I0719 18:15:12.737997   23130 client.go:168] LocalClient.Create starting
	I0719 18:15:12.738029   23130 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:15:12.803063   23130 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:15:12.928516   23130 main.go:141] libmachine: Running pre-create checks...
	I0719 18:15:12.928536   23130 main.go:141] libmachine: (addons-990895) Calling .PreCreateCheck
	I0719 18:15:12.929040   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:12.929471   23130 main.go:141] libmachine: Creating machine...
	I0719 18:15:12.929484   23130 main.go:141] libmachine: (addons-990895) Calling .Create
	I0719 18:15:12.929595   23130 main.go:141] libmachine: (addons-990895) Creating KVM machine...
	I0719 18:15:12.930906   23130 main.go:141] libmachine: (addons-990895) DBG | found existing default KVM network
	I0719 18:15:12.931602   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:12.931477   23151 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0719 18:15:12.931629   23130 main.go:141] libmachine: (addons-990895) DBG | created network xml: 
	I0719 18:15:12.931643   23130 main.go:141] libmachine: (addons-990895) DBG | <network>
	I0719 18:15:12.931656   23130 main.go:141] libmachine: (addons-990895) DBG |   <name>mk-addons-990895</name>
	I0719 18:15:12.931666   23130 main.go:141] libmachine: (addons-990895) DBG |   <dns enable='no'/>
	I0719 18:15:12.931675   23130 main.go:141] libmachine: (addons-990895) DBG |   
	I0719 18:15:12.931682   23130 main.go:141] libmachine: (addons-990895) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 18:15:12.931700   23130 main.go:141] libmachine: (addons-990895) DBG |     <dhcp>
	I0719 18:15:12.931709   23130 main.go:141] libmachine: (addons-990895) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 18:15:12.931716   23130 main.go:141] libmachine: (addons-990895) DBG |     </dhcp>
	I0719 18:15:12.931721   23130 main.go:141] libmachine: (addons-990895) DBG |   </ip>
	I0719 18:15:12.931731   23130 main.go:141] libmachine: (addons-990895) DBG |   
	I0719 18:15:12.931741   23130 main.go:141] libmachine: (addons-990895) DBG | </network>
	I0719 18:15:12.931751   23130 main.go:141] libmachine: (addons-990895) DBG | 
	I0719 18:15:12.936991   23130 main.go:141] libmachine: (addons-990895) DBG | trying to create private KVM network mk-addons-990895 192.168.39.0/24...
	I0719 18:15:12.999137   23130 main.go:141] libmachine: (addons-990895) DBG | private KVM network mk-addons-990895 192.168.39.0/24 created
	I0719 18:15:12.999170   23130 main.go:141] libmachine: (addons-990895) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 ...
	I0719 18:15:12.999197   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:12.999100   23151 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:12.999210   23130 main.go:141] libmachine: (addons-990895) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:15:12.999247   23130 main.go:141] libmachine: (addons-990895) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:15:13.253348   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.253206   23151 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa...
	I0719 18:15:13.542394   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.542268   23151 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/addons-990895.rawdisk...
	I0719 18:15:13.542436   23130 main.go:141] libmachine: (addons-990895) DBG | Writing magic tar header
	I0719 18:15:13.542445   23130 main.go:141] libmachine: (addons-990895) DBG | Writing SSH key tar header
	I0719 18:15:13.542460   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.542389   23151 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 ...
	I0719 18:15:13.542471   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895
	I0719 18:15:13.542537   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 (perms=drwx------)
	I0719 18:15:13.542575   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:15:13.542586   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:15:13.542595   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:13.542601   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:15:13.542609   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:15:13.542615   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:15:13.542625   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home
	I0719 18:15:13.542634   23130 main.go:141] libmachine: (addons-990895) DBG | Skipping /home - not owner
	I0719 18:15:13.542645   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:15:13.542661   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:15:13.542670   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:15:13.542689   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:15:13.542700   23130 main.go:141] libmachine: (addons-990895) Creating domain...
	I0719 18:15:13.543651   23130 main.go:141] libmachine: (addons-990895) define libvirt domain using xml: 
	I0719 18:15:13.543669   23130 main.go:141] libmachine: (addons-990895) <domain type='kvm'>
	I0719 18:15:13.543679   23130 main.go:141] libmachine: (addons-990895)   <name>addons-990895</name>
	I0719 18:15:13.543687   23130 main.go:141] libmachine: (addons-990895)   <memory unit='MiB'>4000</memory>
	I0719 18:15:13.543696   23130 main.go:141] libmachine: (addons-990895)   <vcpu>2</vcpu>
	I0719 18:15:13.543702   23130 main.go:141] libmachine: (addons-990895)   <features>
	I0719 18:15:13.543709   23130 main.go:141] libmachine: (addons-990895)     <acpi/>
	I0719 18:15:13.543712   23130 main.go:141] libmachine: (addons-990895)     <apic/>
	I0719 18:15:13.543728   23130 main.go:141] libmachine: (addons-990895)     <pae/>
	I0719 18:15:13.543736   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.543741   23130 main.go:141] libmachine: (addons-990895)   </features>
	I0719 18:15:13.543746   23130 main.go:141] libmachine: (addons-990895)   <cpu mode='host-passthrough'>
	I0719 18:15:13.543751   23130 main.go:141] libmachine: (addons-990895)   
	I0719 18:15:13.543759   23130 main.go:141] libmachine: (addons-990895)   </cpu>
	I0719 18:15:13.543764   23130 main.go:141] libmachine: (addons-990895)   <os>
	I0719 18:15:13.543770   23130 main.go:141] libmachine: (addons-990895)     <type>hvm</type>
	I0719 18:15:13.543776   23130 main.go:141] libmachine: (addons-990895)     <boot dev='cdrom'/>
	I0719 18:15:13.543781   23130 main.go:141] libmachine: (addons-990895)     <boot dev='hd'/>
	I0719 18:15:13.543786   23130 main.go:141] libmachine: (addons-990895)     <bootmenu enable='no'/>
	I0719 18:15:13.543792   23130 main.go:141] libmachine: (addons-990895)   </os>
	I0719 18:15:13.543797   23130 main.go:141] libmachine: (addons-990895)   <devices>
	I0719 18:15:13.543808   23130 main.go:141] libmachine: (addons-990895)     <disk type='file' device='cdrom'>
	I0719 18:15:13.543818   23130 main.go:141] libmachine: (addons-990895)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/boot2docker.iso'/>
	I0719 18:15:13.543825   23130 main.go:141] libmachine: (addons-990895)       <target dev='hdc' bus='scsi'/>
	I0719 18:15:13.543842   23130 main.go:141] libmachine: (addons-990895)       <readonly/>
	I0719 18:15:13.543856   23130 main.go:141] libmachine: (addons-990895)     </disk>
	I0719 18:15:13.543880   23130 main.go:141] libmachine: (addons-990895)     <disk type='file' device='disk'>
	I0719 18:15:13.543913   23130 main.go:141] libmachine: (addons-990895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:15:13.543930   23130 main.go:141] libmachine: (addons-990895)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/addons-990895.rawdisk'/>
	I0719 18:15:13.543938   23130 main.go:141] libmachine: (addons-990895)       <target dev='hda' bus='virtio'/>
	I0719 18:15:13.543944   23130 main.go:141] libmachine: (addons-990895)     </disk>
	I0719 18:15:13.543951   23130 main.go:141] libmachine: (addons-990895)     <interface type='network'>
	I0719 18:15:13.543958   23130 main.go:141] libmachine: (addons-990895)       <source network='mk-addons-990895'/>
	I0719 18:15:13.543965   23130 main.go:141] libmachine: (addons-990895)       <model type='virtio'/>
	I0719 18:15:13.543990   23130 main.go:141] libmachine: (addons-990895)     </interface>
	I0719 18:15:13.544012   23130 main.go:141] libmachine: (addons-990895)     <interface type='network'>
	I0719 18:15:13.544026   23130 main.go:141] libmachine: (addons-990895)       <source network='default'/>
	I0719 18:15:13.544036   23130 main.go:141] libmachine: (addons-990895)       <model type='virtio'/>
	I0719 18:15:13.544050   23130 main.go:141] libmachine: (addons-990895)     </interface>
	I0719 18:15:13.544060   23130 main.go:141] libmachine: (addons-990895)     <serial type='pty'>
	I0719 18:15:13.544070   23130 main.go:141] libmachine: (addons-990895)       <target port='0'/>
	I0719 18:15:13.544085   23130 main.go:141] libmachine: (addons-990895)     </serial>
	I0719 18:15:13.544097   23130 main.go:141] libmachine: (addons-990895)     <console type='pty'>
	I0719 18:15:13.544107   23130 main.go:141] libmachine: (addons-990895)       <target type='serial' port='0'/>
	I0719 18:15:13.544117   23130 main.go:141] libmachine: (addons-990895)     </console>
	I0719 18:15:13.544127   23130 main.go:141] libmachine: (addons-990895)     <rng model='virtio'>
	I0719 18:15:13.544139   23130 main.go:141] libmachine: (addons-990895)       <backend model='random'>/dev/random</backend>
	I0719 18:15:13.544153   23130 main.go:141] libmachine: (addons-990895)     </rng>
	I0719 18:15:13.544164   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.544175   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.544186   23130 main.go:141] libmachine: (addons-990895)   </devices>
	I0719 18:15:13.544196   23130 main.go:141] libmachine: (addons-990895) </domain>
	I0719 18:15:13.544210   23130 main.go:141] libmachine: (addons-990895) 
	I0719 18:15:13.549953   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:00:3a:78 in network default
	I0719 18:15:13.550604   23130 main.go:141] libmachine: (addons-990895) Ensuring networks are active...
	I0719 18:15:13.550623   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:13.551190   23130 main.go:141] libmachine: (addons-990895) Ensuring network default is active
	I0719 18:15:13.551487   23130 main.go:141] libmachine: (addons-990895) Ensuring network mk-addons-990895 is active
	I0719 18:15:13.551978   23130 main.go:141] libmachine: (addons-990895) Getting domain xml...
	I0719 18:15:13.552632   23130 main.go:141] libmachine: (addons-990895) Creating domain...
	I0719 18:15:14.957544   23130 main.go:141] libmachine: (addons-990895) Waiting to get IP...
	I0719 18:15:14.958478   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:14.958923   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:14.958962   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:14.958915   23151 retry.go:31] will retry after 253.11897ms: waiting for machine to come up
	I0719 18:15:15.213417   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.213856   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.213883   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.213809   23151 retry.go:31] will retry after 301.657933ms: waiting for machine to come up
	I0719 18:15:15.516593   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.517045   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.517076   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.516993   23151 retry.go:31] will retry after 344.420609ms: waiting for machine to come up
	I0719 18:15:15.863556   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.863982   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.864006   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.863925   23151 retry.go:31] will retry after 515.563894ms: waiting for machine to come up
	I0719 18:15:16.380427   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:16.380964   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:16.380996   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:16.380891   23151 retry.go:31] will retry after 477.23814ms: waiting for machine to come up
	I0719 18:15:16.859648   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:16.860113   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:16.860140   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:16.860056   23151 retry.go:31] will retry after 720.689928ms: waiting for machine to come up
	I0719 18:15:17.581910   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:17.582390   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:17.582418   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:17.582340   23151 retry.go:31] will retry after 1.039280451s: waiting for machine to come up
	I0719 18:15:18.622660   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:18.623063   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:18.623103   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:18.623020   23151 retry.go:31] will retry after 1.06682594s: waiting for machine to come up
	I0719 18:15:19.691356   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:19.691787   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:19.691825   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:19.691750   23151 retry.go:31] will retry after 1.440824705s: waiting for machine to come up
	I0719 18:15:21.133643   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:21.134051   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:21.134075   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:21.134026   23151 retry.go:31] will retry after 2.215935281s: waiting for machine to come up
	I0719 18:15:23.351043   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:23.351612   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:23.351635   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:23.351552   23151 retry.go:31] will retry after 2.178016178s: waiting for machine to come up
	I0719 18:15:25.531912   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:25.532303   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:25.532355   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:25.532253   23151 retry.go:31] will retry after 3.627193721s: waiting for machine to come up
	I0719 18:15:29.160706   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:29.161136   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:29.161173   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:29.161093   23151 retry.go:31] will retry after 4.493345828s: waiting for machine to come up
	I0719 18:15:33.656508   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.656961   23130 main.go:141] libmachine: (addons-990895) Found IP for machine: 192.168.39.239
	I0719 18:15:33.656980   23130 main.go:141] libmachine: (addons-990895) Reserving static IP address...
	I0719 18:15:33.656991   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has current primary IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.657395   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find host DHCP lease matching {name: "addons-990895", mac: "52:54:00:15:90:82", ip: "192.168.39.239"} in network mk-addons-990895
	I0719 18:15:33.726483   23130 main.go:141] libmachine: (addons-990895) Reserved static IP address: 192.168.39.239
	I0719 18:15:33.726516   23130 main.go:141] libmachine: (addons-990895) Waiting for SSH to be available...
	I0719 18:15:33.726532   23130 main.go:141] libmachine: (addons-990895) DBG | Getting to WaitForSSH function...
	I0719 18:15:33.729220   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.729693   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.729726   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.729911   23130 main.go:141] libmachine: (addons-990895) DBG | Using SSH client type: external
	I0719 18:15:33.729935   23130 main.go:141] libmachine: (addons-990895) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa (-rw-------)
	I0719 18:15:33.729961   23130 main.go:141] libmachine: (addons-990895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:15:33.729972   23130 main.go:141] libmachine: (addons-990895) DBG | About to run SSH command:
	I0719 18:15:33.729980   23130 main.go:141] libmachine: (addons-990895) DBG | exit 0
	I0719 18:15:33.867921   23130 main.go:141] libmachine: (addons-990895) DBG | SSH cmd err, output: <nil>: 
	I0719 18:15:33.868216   23130 main.go:141] libmachine: (addons-990895) KVM machine creation complete!
	I0719 18:15:33.868448   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:33.868986   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:33.869162   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:33.869340   23130 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:15:33.869354   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:15:33.870449   23130 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:15:33.870462   23130 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:15:33.870468   23130 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:15:33.870474   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:33.872587   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.872980   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.873006   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.873136   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:33.873271   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.873415   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.873519   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:33.873661   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:33.873839   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:33.873849   23130 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:15:33.971162   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:15:33.971183   23130 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:15:33.971193   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:33.974045   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.974403   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.974426   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.974587   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:33.974757   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.974900   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.975109   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:33.975307   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:33.975563   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:33.975577   23130 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:15:34.072309   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:15:34.072407   23130 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:15:34.072421   23130 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:15:34.072433   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.072797   23130 buildroot.go:166] provisioning hostname "addons-990895"
	I0719 18:15:34.072821   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.073037   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.075464   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.075810   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.075831   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.076035   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.076205   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.076365   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.076518   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.076691   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.076863   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.076875   23130 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-990895 && echo "addons-990895" | sudo tee /etc/hostname
	I0719 18:15:34.189485   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-990895
	
	I0719 18:15:34.189526   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.192401   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.192736   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.192778   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.192899   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.193105   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.193286   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.193428   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.193590   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.193754   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.193768   23130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-990895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-990895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-990895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:15:34.299726   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:15:34.299762   23130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:15:34.299795   23130 buildroot.go:174] setting up certificates
	I0719 18:15:34.299808   23130 provision.go:84] configureAuth start
	I0719 18:15:34.299827   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.300161   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:34.302973   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.303339   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.303379   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.303557   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.305877   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.306295   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.306318   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.306506   23130 provision.go:143] copyHostCerts
	I0719 18:15:34.306570   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:15:34.306692   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:15:34.306771   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:15:34.306827   23130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.addons-990895 san=[127.0.0.1 192.168.39.239 addons-990895 localhost minikube]
	I0719 18:15:34.568397   23130 provision.go:177] copyRemoteCerts
	I0719 18:15:34.568459   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:15:34.568491   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.571014   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.571279   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.571308   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.571457   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.571653   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.571793   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.571931   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:34.649873   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:15:34.672039   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:15:34.694665   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:15:34.716480   23130 provision.go:87] duration metric: took 416.656805ms to configureAuth
	I0719 18:15:34.716502   23130 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:15:34.716679   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:15:34.716762   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.719980   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.720467   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.720493   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.720767   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.720997   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.721163   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.721324   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.721521   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.721724   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.721741   23130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:15:34.964357   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:15:34.964391   23130 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:15:34.964402   23130 main.go:141] libmachine: (addons-990895) Calling .GetURL
	I0719 18:15:34.965835   23130 main.go:141] libmachine: (addons-990895) DBG | Using libvirt version 6000000
	I0719 18:15:34.968099   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.968571   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.968605   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.968738   23130 main.go:141] libmachine: Docker is up and running!
	I0719 18:15:34.968752   23130 main.go:141] libmachine: Reticulating splines...
	I0719 18:15:34.968759   23130 client.go:171] duration metric: took 22.230753522s to LocalClient.Create
	I0719 18:15:34.968785   23130 start.go:167] duration metric: took 22.230813085s to libmachine.API.Create "addons-990895"
	I0719 18:15:34.968797   23130 start.go:293] postStartSetup for "addons-990895" (driver="kvm2")
	I0719 18:15:34.968811   23130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:15:34.968835   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:34.969073   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:15:34.969096   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.971357   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.971649   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.971672   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.971821   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.972051   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.972194   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.972327   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.050527   23130 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:15:35.054591   23130 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:15:35.054621   23130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:15:35.054697   23130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:15:35.054721   23130 start.go:296] duration metric: took 85.917302ms for postStartSetup
	I0719 18:15:35.054751   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:35.055300   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:35.057772   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.058064   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.058092   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.058350   23130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json ...
	I0719 18:15:35.058551   23130 start.go:128] duration metric: took 22.338520539s to createHost
	I0719 18:15:35.058577   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.060842   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.061161   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.061182   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.061332   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.061494   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.061628   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.061774   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.061924   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:35.062118   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:35.062132   23130 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:15:35.160508   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721412935.136749392
	
	I0719 18:15:35.160531   23130 fix.go:216] guest clock: 1721412935.136749392
	I0719 18:15:35.160539   23130 fix.go:229] Guest: 2024-07-19 18:15:35.136749392 +0000 UTC Remote: 2024-07-19 18:15:35.058563864 +0000 UTC m=+22.435503273 (delta=78.185528ms)
	I0719 18:15:35.160576   23130 fix.go:200] guest clock delta is within tolerance: 78.185528ms
	I0719 18:15:35.160595   23130 start.go:83] releasing machines lock for "addons-990895", held for 22.440634848s
	I0719 18:15:35.160622   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.160868   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:35.163552   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.163905   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.163932   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.164061   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164558   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164750   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164890   23130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:15:35.164941   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.164971   23130 ssh_runner.go:195] Run: cat /version.json
	I0719 18:15:35.164993   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.167616   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.167908   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.167945   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168006   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168083   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.168241   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.168391   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.168428   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.168451   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168550   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.168882   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.169036   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.169196   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.169360   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.274442   23130 ssh_runner.go:195] Run: systemctl --version
	I0719 18:15:35.279962   23130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:15:35.436149   23130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:15:35.443206   23130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:15:35.443271   23130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:15:35.459973   23130 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:15:35.460002   23130 start.go:495] detecting cgroup driver to use...
	I0719 18:15:35.460066   23130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:15:35.474898   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:15:35.487478   23130 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:15:35.487535   23130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:15:35.500053   23130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:15:35.512231   23130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:15:35.624844   23130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:15:35.762796   23130 docker.go:233] disabling docker service ...
	I0719 18:15:35.762871   23130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:15:35.776361   23130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:15:35.788829   23130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:15:35.913323   23130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:15:36.028092   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:15:36.041197   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:15:36.058208   23130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:15:36.058264   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.067481   23130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:15:36.067546   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.077090   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.086563   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.096146   23130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:15:36.105794   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.115580   23130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.130970   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.140464   23130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:15:36.148842   23130 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:15:36.148889   23130 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:15:36.160381   23130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:15:36.168764   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:15:36.272945   23130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 18:15:36.399624   23130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:15:36.399716   23130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:15:36.404043   23130 start.go:563] Will wait 60s for crictl version
	I0719 18:15:36.404111   23130 ssh_runner.go:195] Run: which crictl
	I0719 18:15:36.407401   23130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:15:36.443762   23130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:15:36.443907   23130 ssh_runner.go:195] Run: crio --version
	I0719 18:15:36.470011   23130 ssh_runner.go:195] Run: crio --version
	I0719 18:15:36.497687   23130 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:15:36.499071   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:36.501548   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:36.501893   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:36.501923   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:36.502132   23130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:15:36.506274   23130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:15:36.517497   23130 kubeadm.go:883] updating cluster {Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:15:36.517610   23130 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:15:36.517652   23130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:15:36.546231   23130 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 18:15:36.546303   23130 ssh_runner.go:195] Run: which lz4
	I0719 18:15:36.549860   23130 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 18:15:36.554411   23130 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 18:15:36.554434   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 18:15:37.689583   23130 crio.go:462] duration metric: took 1.139751218s to copy over tarball
	I0719 18:15:37.689658   23130 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 18:15:39.891318   23130 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.201628814s)
	I0719 18:15:39.891355   23130 crio.go:469] duration metric: took 2.201746484s to extract the tarball
	I0719 18:15:39.891366   23130 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 18:15:39.942005   23130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:15:39.984400   23130 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:15:39.984424   23130 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:15:39.984434   23130 kubeadm.go:934] updating node { 192.168.39.239 8443 v1.30.3 crio true true} ...
	I0719 18:15:39.984570   23130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:15:39.984657   23130 ssh_runner.go:195] Run: crio config
	I0719 18:15:40.031283   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:40.031303   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:40.031315   23130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:15:40.031339   23130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-990895 NodeName:addons-990895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:15:40.031475   23130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-990895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 18:15:40.031535   23130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:15:40.040582   23130 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:15:40.040653   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 18:15:40.049121   23130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 18:15:40.064016   23130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:15:40.078515   23130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 18:15:40.092955   23130 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0719 18:15:40.096256   23130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:15:40.107024   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:15:40.220066   23130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:15:40.235629   23130 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895 for IP: 192.168.39.239
	I0719 18:15:40.235658   23130 certs.go:194] generating shared ca certs ...
	I0719 18:15:40.235677   23130 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.235858   23130 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:15:40.336611   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt ...
	I0719 18:15:40.336639   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt: {Name:mk27146b57328cab1c604a664f4363e78005c7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.336804   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key ...
	I0719 18:15:40.336815   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key: {Name:mk6755898b6f2bbefdc4f70789d7f9202127771b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.336915   23130 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:15:40.610789   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt ...
	I0719 18:15:40.610815   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt: {Name:mk6e8498ab120800dd3d9c1317094f77a9ea2c15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.610971   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key ...
	I0719 18:15:40.610982   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key: {Name:mk011c1c36825827521b2335be839910ab841162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.611045   23130 certs.go:256] generating profile certs ...
	I0719 18:15:40.611094   23130 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key
	I0719 18:15:40.611107   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt with IP's: []
	I0719 18:15:40.988492   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt ...
	I0719 18:15:40.988523   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: {Name:mk0b9ad5430de0ecaa4bc22d0cba626c3711c0be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.988679   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key ...
	I0719 18:15:40.988690   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key: {Name:mk2d8c7920bcdee37d6f657890a1810a1bb0536e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.988757   23130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02
	I0719 18:15:40.988774   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0719 18:15:41.330135   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 ...
	I0719 18:15:41.330164   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02: {Name:mkab0302e3d233887146ad77168048a8f6f68639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.330324   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02 ...
	I0719 18:15:41.330337   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02: {Name:mk9119c7f1277d15d206e380b7556c1b61dc9bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.330406   23130 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt
	I0719 18:15:41.330474   23130 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key
	I0719 18:15:41.330518   23130 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key
	I0719 18:15:41.330534   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt with IP's: []
	I0719 18:15:41.506283   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt ...
	I0719 18:15:41.506310   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt: {Name:mk4e7cacfea97232ffdbf1aa0b96fb892e66dae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.506460   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key ...
	I0719 18:15:41.506471   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key: {Name:mkd03c28824febcacc9781048345473e8b2394e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.506625   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:15:41.506656   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:15:41.506675   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:15:41.506696   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:15:41.507246   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:15:41.531408   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:15:41.554667   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:15:41.576703   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:15:41.598385   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 18:15:41.620492   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:15:41.642746   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:15:41.664215   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 18:15:41.686011   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:15:41.707904   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:15:41.723357   23130 ssh_runner.go:195] Run: openssl version
	I0719 18:15:41.728879   23130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:15:41.738798   23130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.742819   23130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.742881   23130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.748367   23130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:15:41.758508   23130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:15:41.762042   23130 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:15:41.762096   23130 kubeadm.go:392] StartCluster: {Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:15:41.762195   23130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:15:41.762247   23130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:15:41.798255   23130 cri.go:89] found id: ""
	I0719 18:15:41.798329   23130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 18:15:41.809999   23130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 18:15:41.828880   23130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 18:15:41.844187   23130 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 18:15:41.844204   23130 kubeadm.go:157] found existing configuration files:
	
	I0719 18:15:41.844253   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 18:15:41.854677   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 18:15:41.854722   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 18:15:41.867637   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 18:15:41.875932   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 18:15:41.875989   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 18:15:41.884739   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 18:15:41.893097   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 18:15:41.893157   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 18:15:41.901726   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 18:15:41.909866   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 18:15:41.909921   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 18:15:41.918234   23130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 18:15:42.098735   23130 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 18:15:50.932390   23130 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 18:15:50.932458   23130 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 18:15:50.932573   23130 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 18:15:50.932706   23130 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 18:15:50.932876   23130 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 18:15:50.932967   23130 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 18:15:50.934209   23130 out.go:204]   - Generating certificates and keys ...
	I0719 18:15:50.934281   23130 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 18:15:50.934345   23130 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 18:15:50.934445   23130 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 18:15:50.934508   23130 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 18:15:50.934564   23130 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 18:15:50.934609   23130 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 18:15:50.934677   23130 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 18:15:50.934835   23130 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-990895 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0719 18:15:50.934910   23130 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 18:15:50.935067   23130 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-990895 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0719 18:15:50.935152   23130 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 18:15:50.935245   23130 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 18:15:50.935317   23130 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 18:15:50.935397   23130 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 18:15:50.935472   23130 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 18:15:50.935545   23130 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 18:15:50.935620   23130 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 18:15:50.935711   23130 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 18:15:50.935787   23130 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 18:15:50.935919   23130 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 18:15:50.936041   23130 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 18:15:50.937240   23130 out.go:204]   - Booting up control plane ...
	I0719 18:15:50.937317   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 18:15:50.937392   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 18:15:50.937447   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 18:15:50.937572   23130 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 18:15:50.937685   23130 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 18:15:50.937743   23130 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 18:15:50.937889   23130 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 18:15:50.937977   23130 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 18:15:50.938049   23130 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.887458ms
	I0719 18:15:50.938126   23130 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 18:15:50.938181   23130 kubeadm.go:310] [api-check] The API server is healthy after 4.506542904s
	I0719 18:15:50.938282   23130 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 18:15:50.938422   23130 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 18:15:50.938507   23130 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 18:15:50.938690   23130 kubeadm.go:310] [mark-control-plane] Marking the node addons-990895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 18:15:50.938772   23130 kubeadm.go:310] [bootstrap-token] Using token: s59p6j.odilrpyh2nf3e6fm
	I0719 18:15:50.939990   23130 out.go:204]   - Configuring RBAC rules ...
	I0719 18:15:50.940102   23130 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 18:15:50.940180   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 18:15:50.940298   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 18:15:50.940403   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 18:15:50.940496   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 18:15:50.940571   23130 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 18:15:50.940691   23130 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 18:15:50.940733   23130 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 18:15:50.940772   23130 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 18:15:50.940778   23130 kubeadm.go:310] 
	I0719 18:15:50.940832   23130 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 18:15:50.940850   23130 kubeadm.go:310] 
	I0719 18:15:50.940917   23130 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 18:15:50.940924   23130 kubeadm.go:310] 
	I0719 18:15:50.940973   23130 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 18:15:50.941055   23130 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 18:15:50.941127   23130 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 18:15:50.941138   23130 kubeadm.go:310] 
	I0719 18:15:50.941203   23130 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 18:15:50.941217   23130 kubeadm.go:310] 
	I0719 18:15:50.941270   23130 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 18:15:50.941277   23130 kubeadm.go:310] 
	I0719 18:15:50.941320   23130 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 18:15:50.941387   23130 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 18:15:50.941449   23130 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 18:15:50.941455   23130 kubeadm.go:310] 
	I0719 18:15:50.941533   23130 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 18:15:50.941598   23130 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 18:15:50.941604   23130 kubeadm.go:310] 
	I0719 18:15:50.941701   23130 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s59p6j.odilrpyh2nf3e6fm \
	I0719 18:15:50.941795   23130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 18:15:50.941814   23130 kubeadm.go:310] 	--control-plane 
	I0719 18:15:50.941820   23130 kubeadm.go:310] 
	I0719 18:15:50.941900   23130 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 18:15:50.941909   23130 kubeadm.go:310] 
	I0719 18:15:50.941984   23130 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s59p6j.odilrpyh2nf3e6fm \
	I0719 18:15:50.942078   23130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 18:15:50.942086   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:50.942093   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:50.943397   23130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 18:15:50.944436   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 18:15:50.954378   23130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 18:15:50.971087   23130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 18:15:50.971197   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:50.971208   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-990895 minikube.k8s.io/updated_at=2024_07_19T18_15_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=addons-990895 minikube.k8s.io/primary=true
	I0719 18:15:50.997148   23130 ops.go:34] apiserver oom_adj: -16
	I0719 18:15:51.103714   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:51.604052   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:52.104723   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:52.604363   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:53.104422   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:53.603765   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:54.104309   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:54.604472   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:55.103979   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:55.604134   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:56.103798   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:56.604410   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:57.104186   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:57.604716   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:58.104023   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:58.603897   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:59.103828   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:59.604741   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:00.103930   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:00.603712   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:01.104591   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:01.604130   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:02.104106   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:02.604527   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:03.103861   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:03.604505   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:04.103829   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:04.213080   23130 kubeadm.go:1113] duration metric: took 13.241943482s to wait for elevateKubeSystemPrivileges
	I0719 18:16:04.213118   23130 kubeadm.go:394] duration metric: took 22.451027119s to StartCluster
	I0719 18:16:04.213141   23130 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:16:04.213272   23130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:16:04.213859   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:16:04.214080   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 18:16:04.214113   23130 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:16:04.214196   23130 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 18:16:04.214299   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:16:04.214314   23130 addons.go:69] Setting registry=true in profile "addons-990895"
	I0719 18:16:04.214298   23130 addons.go:69] Setting yakd=true in profile "addons-990895"
	I0719 18:16:04.214326   23130 addons.go:69] Setting default-storageclass=true in profile "addons-990895"
	I0719 18:16:04.214357   23130 addons.go:234] Setting addon registry=true in "addons-990895"
	I0719 18:16:04.214363   23130 addons.go:234] Setting addon yakd=true in "addons-990895"
	I0719 18:16:04.214352   23130 addons.go:69] Setting cloud-spanner=true in profile "addons-990895"
	I0719 18:16:04.214370   23130 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-990895"
	I0719 18:16:04.214306   23130 addons.go:69] Setting gcp-auth=true in profile "addons-990895"
	I0719 18:16:04.214395   23130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-990895"
	I0719 18:16:04.214405   23130 mustload.go:65] Loading cluster: addons-990895
	I0719 18:16:04.214407   23130 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-990895"
	I0719 18:16:04.214408   23130 addons.go:234] Setting addon cloud-spanner=true in "addons-990895"
	I0719 18:16:04.214426   23130 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-990895"
	I0719 18:16:04.214449   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214455   23130 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-990895"
	I0719 18:16:04.214462   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214548   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:16:04.214813   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214832   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214830   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214863   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214883   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214884   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214917   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214925   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214973   23130 addons.go:69] Setting helm-tiller=true in profile "addons-990895"
	I0719 18:16:04.214974   23130 addons.go:69] Setting volumesnapshots=true in profile "addons-990895"
	I0719 18:16:04.215003   23130 addons.go:234] Setting addon helm-tiller=true in "addons-990895"
	I0719 18:16:04.215037   23130 addons.go:69] Setting ingress-dns=true in profile "addons-990895"
	I0719 18:16:04.215068   23130 addons.go:69] Setting metrics-server=true in profile "addons-990895"
	I0719 18:16:04.215042   23130 addons.go:69] Setting inspektor-gadget=true in profile "addons-990895"
	I0719 18:16:04.215110   23130 addons.go:234] Setting addon metrics-server=true in "addons-990895"
	I0719 18:16:04.215126   23130 addons.go:234] Setting addon inspektor-gadget=true in "addons-990895"
	I0719 18:16:04.215140   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214883   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.215191   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.215005   23130 addons.go:234] Setting addon volumesnapshots=true in "addons-990895"
	I0719 18:16:04.215493   23130 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-990895"
	I0719 18:16:04.215702   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.215749   23130 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-990895"
	I0719 18:16:04.215791   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214389   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214398   23130 addons.go:69] Setting storage-provisioner=true in profile "addons-990895"
	I0719 18:16:04.216057   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216078   23130 addons.go:234] Setting addon storage-provisioner=true in "addons-990895"
	I0719 18:16:04.216082   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.216111   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.215702   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.216210   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216227   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216245   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.216258   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.215024   23130 addons.go:69] Setting ingress=true in profile "addons-990895"
	I0719 18:16:04.215058   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.227003   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.227027   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.231222   23130 out.go:177] * Verifying Kubernetes components...
	I0719 18:16:04.231417   23130 addons.go:234] Setting addon ingress=true in "addons-990895"
	I0719 18:16:04.215131   23130 addons.go:234] Setting addon ingress-dns=true in "addons-990895"
	I0719 18:16:04.231484   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.231534   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.231694   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.231731   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.232020   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.232046   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.232070   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.232091   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214389   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.233595   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.233643   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.234643   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:16:04.215482   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.234766   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.240119   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I0719 18:16:04.240153   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0719 18:16:04.240195   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0719 18:16:04.240120   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0719 18:16:04.215016   23130 addons.go:69] Setting volcano=true in profile "addons-990895"
	I0719 18:16:04.240435   23130 addons.go:234] Setting addon volcano=true in "addons-990895"
	I0719 18:16:04.240508   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.240665   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.240714   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.241061   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.241103   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.241880   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.241901   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.241951   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.242038   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.242148   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242158   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242578   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242596   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242731   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.242829   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242849   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242923   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.242974   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243115   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243170   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243218   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.243381   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.245183   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.245210   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.246016   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.246478   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.246523   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.254589   23130 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-990895"
	I0719 18:16:04.254652   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.255103   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.255148   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.261302   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.262759   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0719 18:16:04.262912   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I0719 18:16:04.264433   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.266387   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0719 18:16:04.280774   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0719 18:16:04.281368   23130 addons.go:234] Setting addon default-storageclass=true in "addons-990895"
	I0719 18:16:04.281404   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.281484   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.281817   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.281842   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.282245   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.282263   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.282319   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.282318   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.282462   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.282520   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0719 18:16:04.282624   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0719 18:16:04.282766   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.282907   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.282918   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.283230   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.283303   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.283319   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.283542   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.283881   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.283903   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.284034   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.284043   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.284160   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.284198   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.284203   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.284408   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.284869   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.284906   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.288607   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0719 18:16:04.288633   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.288683   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.288699   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.289057   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.289123   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.289201   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.289252   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.289375   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.289487   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.289500   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.289585   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.289605   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.289817   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0719 18:16:04.290212   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.290426   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.290920   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.290949   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.291179   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.291191   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.291302   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.291504   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.291682   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.291722   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I0719 18:16:04.291873   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.292240   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.292310   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.292350   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.293226   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.293276   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.293866   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.293931   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0719 18:16:04.294401   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.294429   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.294661   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.295337   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.295931   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.295977   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.296328   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.296712   23130 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 18:16:04.297029   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.297085   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.298194   23130 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 18:16:04.298210   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 18:16:04.298227   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.301558   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.301976   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.302009   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.302215   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.302352   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.302445   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.302533   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.310105   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0719 18:16:04.310700   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0719 18:16:04.310871   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.311376   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.311398   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.311717   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.311900   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.312123   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.312702   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.312756   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.313110   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.313481   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0719 18:16:04.313658   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.313815   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.313881   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.313962   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.314468   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.314490   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.314897   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.315289   23130 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 18:16:04.315475   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.315494   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.316787   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 18:16:04.316811   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 18:16:04.316826   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.318246   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0719 18:16:04.318779   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.319239   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.319256   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.319595   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.319777   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.320271   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.320976   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.321370   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.321725   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.321799   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.322056   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.322199   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.322337   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.323585   23130 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 18:16:04.324696   23130 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 18:16:04.325189   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0719 18:16:04.325700   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.325900   23130 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 18:16:04.325915   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 18:16:04.325931   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.326421   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.326439   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.327175   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.327884   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.329679   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.329743   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.329769   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.329789   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.330155   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.330328   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.330474   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.334231   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0719 18:16:04.334692   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.335213   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.335235   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.335571   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.335732   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.337179   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.338583   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0719 18:16:04.338916   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.339039   23130 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 18:16:04.339402   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.339419   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.339728   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.339929   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.340917   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 18:16:04.340937   23130 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 18:16:04.340956   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.341730   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0719 18:16:04.342037   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.342513   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.342529   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.342599   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0719 18:16:04.343096   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.343524   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.343538   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.343860   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.344214   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.344233   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.344700   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0719 18:16:04.344958   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.345013   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0719 18:16:04.345103   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.345540   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.345556   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.345564   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.345567   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.345993   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.346161   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.346320   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.346551   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.346576   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.346632   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.346968   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.347192   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.347045   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.347445   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.347481   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.347818   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.348016   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.348226   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.348337   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 18:16:04.348406   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.348874   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.349679   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.349870   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:04.349890   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:04.350173   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:04.350184   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:04.350194   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:04.350201   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:04.350470   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:04.350515   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:04.350525   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:04.350533   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	W0719 18:16:04.350602   23130 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 18:16:04.350819   23130 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 18:16:04.350819   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 18:16:04.351599   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0719 18:16:04.352945   23130 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 18:16:04.352961   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 18:16:04.352978   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.353782   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I0719 18:16:04.354320   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 18:16:04.354855   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.355415   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.355792   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.355809   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.356003   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.356023   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.356330   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.356377   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.356514   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.356611   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 18:16:04.356731   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.356793   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.357309   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.357227   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.357348   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.357251   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.357720   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.357843   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.357959   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.358210   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0719 18:16:04.358547   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.358673   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 18:16:04.358986   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.358996   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.359280   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.359419   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.361107   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 18:16:04.361261   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.362071   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.362135   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 18:16:04.362617   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.363071   23130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 18:16:04.363092   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.363484   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.363750   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 18:16:04.363776   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.363811   23130 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 18:16:04.364039   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.364956   23130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:16:04.364972   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 18:16:04.364989   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.365087   23130 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 18:16:04.365098   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 18:16:04.365111   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.365859   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 18:16:04.366689   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0719 18:16:04.366909   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 18:16:04.366924   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 18:16:04.366940   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.367976   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.368670   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.368686   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.369105   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.369307   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.369487   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.370061   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0719 18:16:04.370427   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.370740   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.371096   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.371111   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.371275   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.371303   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.371464   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 18:16:04.371494   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.371548   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.371885   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.371921   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.372090   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.372145   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.372468   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.372767   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 18:16:04.372780   23130 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 18:16:04.372804   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.373208   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0719 18:16:04.373598   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.373707   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.373896   23130 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 18:16:04.373907   23130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 18:16:04.373921   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.374011   23130 out.go:177]   - Using image docker.io/busybox:stable
	I0719 18:16:04.374913   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.374930   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.375775   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.376147   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.376212   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.376420   23130 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 18:16:04.376833   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.376852   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.376988   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.377247   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0719 18:16:04.377372   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.377423   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.377483   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.377614   23130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 18:16:04.377632   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 18:16:04.377648   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.377664   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.377851   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.378127   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378155   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378169   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378202   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.378674   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.378697   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.378743   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378760   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378764   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.378905   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.378964   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378982   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.379178   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.379317   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.379375   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.379558   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.379804   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.379810   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.379870   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.380040   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.380247   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.380295   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.380344   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.380428   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:04.380598   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.380855   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.381248   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.381636   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.381654   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.381768   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.381900   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.381954   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.382128   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.382291   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.382841   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:04.383539   23130 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 18:16:04.384895   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 18:16:04.384895   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
	I0719 18:16:04.384968   23130 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 18:16:04.384985   23130 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 18:16:04.385003   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.385828   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.386295   23130 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 18:16:04.386309   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 18:16:04.386324   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.386324   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.386347   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.386725   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.386873   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.388331   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.388480   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.388825   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.388851   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.388984   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.389131   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.389248   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.389401   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.389847   23130 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 18:16:04.390208   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.390749   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.390774   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.390915   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.391072   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.391121   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 18:16:04.391132   23130 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 18:16:04.391145   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.391192   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.391337   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.393528   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.393826   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.393850   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.393960   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.394114   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.394239   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.394358   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	W0719 18:16:04.416108   23130 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39062->192.168.39.239:22: read: connection reset by peer
	I0719 18:16:04.416159   23130 retry.go:31] will retry after 361.4719ms: ssh: handshake failed: read tcp 192.168.39.1:39062->192.168.39.239:22: read: connection reset by peer
	I0719 18:16:04.626774   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 18:16:04.626897   23130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:16:04.630972   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 18:16:04.713646   23130 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 18:16:04.713667   23130 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 18:16:04.724415   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 18:16:04.724436   23130 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 18:16:04.779865   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 18:16:04.797110   23130 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 18:16:04.797140   23130 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 18:16:04.800986   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 18:16:04.824085   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 18:16:04.838707   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 18:16:04.838730   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 18:16:04.841405   23130 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 18:16:04.841421   23130 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 18:16:04.848239   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 18:16:04.852566   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 18:16:04.852594   23130 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 18:16:04.861165   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 18:16:04.861186   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 18:16:04.867479   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:16:04.877043   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 18:16:04.890610   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 18:16:04.890632   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 18:16:04.964973   23130 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 18:16:04.965005   23130 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 18:16:04.970390   23130 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 18:16:04.970409   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 18:16:04.995412   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 18:16:04.995437   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 18:16:04.996525   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 18:16:05.015066   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 18:16:05.015090   23130 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 18:16:05.023595   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 18:16:05.023618   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 18:16:05.113664   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 18:16:05.125737   23130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 18:16:05.125768   23130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 18:16:05.126857   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 18:16:05.126877   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 18:16:05.166847   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 18:16:05.166877   23130 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 18:16:05.191740   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 18:16:05.191765   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 18:16:05.283087   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 18:16:05.283113   23130 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 18:16:05.303323   23130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 18:16:05.303357   23130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 18:16:05.328468   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 18:16:05.328497   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 18:16:05.363121   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 18:16:05.402045   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 18:16:05.402070   23130 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 18:16:05.423264   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 18:16:05.423287   23130 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 18:16:05.451822   23130 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 18:16:05.451856   23130 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 18:16:05.467430   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 18:16:05.467456   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 18:16:05.564649   23130 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:05.564667   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 18:16:05.576712   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 18:16:05.576730   23130 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 18:16:05.645465   23130 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 18:16:05.645485   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 18:16:05.695565   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 18:16:05.695588   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 18:16:05.818884   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:05.854288   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 18:16:05.854309   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 18:16:05.867893   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 18:16:05.874861   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 18:16:05.874889   23130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 18:16:06.036660   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 18:16:06.081365   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 18:16:06.081390   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 18:16:06.097823   23130 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.470896921s)
	I0719 18:16:06.097823   23130 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.471000797s)
	I0719 18:16:06.097948   23130 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
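(Annotation, not part of the test run: the sed pipeline completed above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host address 192.168.39.1. A minimal way to confirm the injected record, assuming kubectl is pointed at this cluster, e.g. via the addons-990895 context:)

    # Print the patched Corefile; the hosts stanza below is what the sed command inserts.
    kubectl --context addons-990895 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected to contain:
    #    hosts {
    #       192.168.39.1 host.minikube.internal
    #       fallthrough
    #    }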
	I0719 18:16:06.098794   23130 node_ready.go:35] waiting up to 6m0s for node "addons-990895" to be "Ready" ...
	I0719 18:16:06.102699   23130 node_ready.go:49] node "addons-990895" has status "Ready":"True"
	I0719 18:16:06.102718   23130 node_ready.go:38] duration metric: took 3.875988ms for node "addons-990895" to be "Ready" ...
	I0719 18:16:06.102725   23130 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:16:06.120945   23130 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:06.380801   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 18:16:06.380828   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 18:16:06.601963   23130 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-990895" context rescaled to 1 replicas
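(Annotation: the rescale logged above is equivalent to the following kubectl invocation, shown only as an illustrative sketch assuming the same cluster context:)

    # Scale the coredns deployment down to a single replica, as kapi.go does here.
    kubectl --context addons-990895 -n kube-system scale deployment coredns --replicas=1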
	I0719 18:16:06.641587   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 18:16:06.641613   23130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 18:16:06.787999   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.156992425s)
	I0719 18:16:06.788059   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:06.788070   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:06.788408   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:06.788451   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:06.788468   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:06.788481   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:06.788491   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:06.788763   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:06.788780   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:06.788792   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:06.826901   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 18:16:08.254304   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:08.982958   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203060529s)
	I0719 18:16:08.982979   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.181964227s)
	I0719 18:16:08.983007   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983020   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983020   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983033   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983052   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.158938631s)
	I0719 18:16:08.983083   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983099   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983129   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.134867104s)
	I0719 18:16:08.983160   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983173   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983191   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.11569029s)
	I0719 18:16:08.983221   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983232   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983624   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983640   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983650   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983659   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983756   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.983783   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983791   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983799   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983806   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983906   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.983943   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983959   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983976   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983993   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984061   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984071   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984078   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.984085   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984142   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.984180   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984197   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984214   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.984229   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984579   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984600   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984878   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.984933   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984951   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985578   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.985609   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985616   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985788   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.985821   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985868   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985850   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985927   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:09.082357   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:09.082382   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:09.082728   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:09.082747   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:09.082754   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	W0719 18:16:09.082841   23130 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
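(Annotation: the warning above is an update conflict while the default-storageclass addon flips the standard storageclass.kubernetes.io/is-default-class annotation; the addon simply re-reads and retries. A hedged sketch of the manual equivalent, assuming the addons-990895 context and the class names shown in the error:)

    # Mark local-path as non-default and standard as default via the usual annotation.
    kubectl --context addons-990895 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl --context addons-990895 patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'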
	I0719 18:16:09.101312   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:09.101337   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:09.101749   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:09.101757   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:09.101770   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:10.632000   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:11.378301   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 18:16:11.378334   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:11.381815   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.382226   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:11.382248   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.382500   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:11.382698   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:11.382880   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:11.383026   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:11.574790   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 18:16:11.670857   23130 addons.go:234] Setting addon gcp-auth=true in "addons-990895"
	I0719 18:16:11.670922   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:11.671317   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:11.671345   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:11.686902   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0719 18:16:11.687296   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:11.687759   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:11.687782   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:11.688104   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:11.688578   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:11.688604   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:11.702757   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0719 18:16:11.703200   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:11.703774   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:11.703793   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:11.704124   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:11.704446   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:11.706060   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:11.706266   23130 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 18:16:11.706285   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:11.709185   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.709648   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:11.709675   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.709827   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:11.709995   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:11.710173   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:11.710365   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:12.304965   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.427887631s)
	I0719 18:16:12.305017   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305019   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.308470617s)
	I0719 18:16:12.305028   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305041   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305053   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305097   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.191394157s)
	I0719 18:16:12.305144   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305157   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305155   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.942005141s)
	I0719 18:16:12.305180   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305189   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305253   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.486339917s)
	W0719 18:16:12.305280   23130 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 18:16:12.305281   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305296   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305310   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305320   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305317   23130 retry.go:31] will retry after 248.791756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
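(Annotation: the failure above is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl apply as the CRDs that define it, so the API server may not have registered the new kind yet; the retry, and the later apply --force run at 18:16:12, succeeds once the CRDs are established. A minimal sketch of avoiding the race, assuming the same file paths and kubeconfig as in this log:)

    # Wait for the snapshot CRD to be established before applying objects of that kind.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml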
	I0719 18:16:12.305394   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.437470306s)
	I0719 18:16:12.305420   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305436   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305469   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305489   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305498   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305506   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305511   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.268821522s)
	I0719 18:16:12.305526   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305533   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305601   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305632   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305640   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305642   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305648   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305659   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305711   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305718   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305727   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305733   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305794   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305803   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305813   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305812   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305820   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305868   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305889   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305890   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305898   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305908   23130 addons.go:475] Verifying addon ingress=true in "addons-990895"
	I0719 18:16:12.305911   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305917   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306026   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306035   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306062   23130 addons.go:475] Verifying addon metrics-server=true in "addons-990895"
	I0719 18:16:12.306114   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306231   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306239   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306005   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306320   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306350   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306356   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306364   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.306370   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.307388   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.307416   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.307423   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.307431   23130 addons.go:475] Verifying addon registry=true in "addons-990895"
	I0719 18:16:12.308487   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.308512   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.308519   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.309678   23130 out.go:177] * Verifying ingress addon...
	I0719 18:16:12.309705   23130 out.go:177] * Verifying registry addon...
	I0719 18:16:12.309710   23130 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-990895 service yakd-dashboard -n yakd-dashboard
	
	I0719 18:16:12.311673   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 18:16:12.311863   23130 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 18:16:12.319304   23130 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 18:16:12.319327   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:12.319368   23130 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 18:16:12.319383   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:12.555314   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:12.644659   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:12.824859   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:12.842719   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:12.852759   23130 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.146471529s)
	I0719 18:16:12.852812   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.025865762s)
	I0719 18:16:12.852854   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.852866   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.853103   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.853120   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.853130   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.853138   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.853425   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.853459   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.853472   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.853484   23130 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-990895"
	I0719 18:16:12.854245   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:12.855081   23130 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 18:16:12.856574   23130 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 18:16:12.857251   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 18:16:12.857823   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 18:16:12.857837   23130 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 18:16:12.886317   23130 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 18:16:12.886345   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:12.956078   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 18:16:12.956100   23130 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 18:16:13.011490   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 18:16:13.011513   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 18:16:13.060502   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
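	[editor's note] The gcp-auth step above copies three manifests into /etc/kubernetes/addons/ and then applies them with the VM's own kubectl binary and kubeconfig. A minimal sketch of that same invocation in Go follows; it assumes you run it on a host where the kubectl path and kubeconfig path are reachable (inside the minikube VM they are exactly the paths shown in the log), whereas minikube itself issues the command over SSH via ssh_runner. Function and variable names here are illustrative, not minikube's.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddonManifests runs `kubectl apply -f <m1> -f <m2> ...` with an
	// explicit KUBECONFIG, mirroring the command recorded in the log above.
	func applyAddonManifests(kubeconfig, kubectl string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Paths taken verbatim from the log; they exist inside the minikube VM.
		err := applyAddonManifests(
			"/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			[]string{
				"/etc/kubernetes/addons/gcp-auth-ns.yaml",
				"/etc/kubernetes/addons/gcp-auth-service.yaml",
				"/etc/kubernetes/addons/gcp-auth-webhook.yaml",
			},
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
			os.Exit(1)
		}
	}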
	I0719 18:16:13.321141   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:13.322776   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:13.368340   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:13.817001   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:13.819914   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:13.862984   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:13.952016   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396649876s)
	I0719 18:16:13.952076   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:13.952093   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:13.952391   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:13.952414   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:13.952425   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:13.952435   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:13.952440   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:13.952696   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:13.952715   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.355904   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:14.399725   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:14.400325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:14.541439   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.48088954s)
	I0719 18:16:14.541497   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:14.541514   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:14.541837   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:14.541896   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:14.541909   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.541921   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:14.541931   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:14.542148   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:14.542165   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.542193   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:14.543406   23130 addons.go:475] Verifying addon gcp-auth=true in "addons-990895"
	I0719 18:16:14.545780   23130 out.go:177] * Verifying gcp-auth addon...
	I0719 18:16:14.547531   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 18:16:14.559467   23130 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 18:16:14.559483   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:14.819034   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:14.819705   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:14.864931   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:15.052919   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:15.128652   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:15.317950   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:15.318755   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:15.362545   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:15.551707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:15.816832   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:15.816954   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:15.862023   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:16.051431   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:16.317449   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:16.318272   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:16.362507   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:16.551610   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:16.817007   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:16.817090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:16.862740   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:17.050818   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:17.316273   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:17.316837   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:17.363280   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:17.552315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:17.627357   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:17.816631   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:17.816704   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:17.865746   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:18.051216   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:18.317681   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:18.318498   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:18.363119   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:18.551646   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:18.816012   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:18.816953   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:18.862656   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:19.050697   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:19.317201   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:19.317500   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:19.362883   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:19.552847   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:19.819909   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:19.819938   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:19.866258   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:20.050748   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:20.130088   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:20.318306   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:20.320886   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:20.362123   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:20.551727   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:20.817251   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:20.817734   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:20.863089   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:21.052141   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:21.317551   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:21.319537   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:21.364102   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:21.550903   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:21.816558   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:21.817139   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:21.862741   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:22.051238   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:22.316224   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:22.317481   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:22.362765   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:22.551985   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:22.629159   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:22.816837   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:22.816999   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:22.862454   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:23.050790   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:23.316818   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:23.317166   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:23.363018   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:23.551044   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:23.817713   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:23.821090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:23.862480   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:24.050929   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:24.317208   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:24.317429   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:24.362442   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:24.551587   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:24.816854   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:24.816888   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:24.862528   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:25.051911   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:25.127502   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:25.316959   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:25.317349   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:25.363011   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:25.551190   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:25.816951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:25.817523   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:25.862334   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:26.051678   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:26.316466   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:26.316468   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:26.362638   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:26.551088   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:26.816198   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:26.816473   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:26.862602   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:27.051163   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:27.316761   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:27.316838   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:27.362408   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:27.551767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:27.626969   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:27.816479   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:27.817759   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:27.861832   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:28.051306   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:28.316938   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:28.317478   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:28.362880   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:28.551629   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:28.816939   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:28.817222   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:28.862101   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:29.052315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:29.317760   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:29.319395   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:29.361885   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:29.551177   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:29.816222   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:29.816333   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:29.863418   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:30.051621   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:30.126982   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:30.316869   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:30.317205   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:30.370889   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:30.551527   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:30.816461   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:30.818197   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:30.863576   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:31.051275   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:31.320281   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:31.325063   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:31.362665   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:31.551147   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:31.817499   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:31.819075   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:31.863068   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:32.051522   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:32.319684   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:32.320106   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:32.365589   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:32.550829   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:32.627428   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:32.817760   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:32.817952   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:32.862024   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:33.051684   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:33.317900   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:33.320593   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:33.363423   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:33.551381   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:33.817770   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:33.818044   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:33.865768   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:34.051190   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:34.316611   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:34.316611   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:34.366207   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:34.551421   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:34.817136   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:34.817233   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.150325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:35.152471   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:35.153818   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:35.316655   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:35.318319   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.363080   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:35.551140   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:35.816891   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:35.817029   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.862820   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:36.051340   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:36.317547   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:36.318161   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:36.362339   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:36.551499   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:36.816522   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:36.816602   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:36.862771   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:37.051363   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:37.317264   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:37.317286   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:37.362758   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:37.551024   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:37.632369   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:37.816376   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:37.816658   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:37.861934   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:38.051585   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:38.317147   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:38.318645   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:38.363070   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:38.551463   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:38.816947   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:38.818274   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:38.863018   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:39.051203   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:39.316455   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:39.317389   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:39.362301   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:39.551334   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:39.818326   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:39.822073   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:39.862734   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:40.051342   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:40.126991   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:40.316712   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:40.317266   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:40.362875   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:40.551394   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:40.816524   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:40.818270   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:40.862767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:41.051712   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:41.315730   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:41.316462   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:41.362509   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:41.552060   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.022090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.022982   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.024016   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:42.051048   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.128190   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:42.316921   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.318160   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.362938   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:42.550980   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.816769   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.818187   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.863224   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:43.051179   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:43.316249   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:43.317510   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:43.363365   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:43.551675   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:43.818606   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:43.819189   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:43.863922   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:44.050856   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:44.323732   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:44.323787   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:44.369697   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:44.551391   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:44.626626   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:44.817539   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:44.818730   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:44.862926   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:45.056815   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:45.317905   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:45.318222   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:45.362315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:45.551082   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:45.817397   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:45.818038   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:45.867568   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.055468   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:46.126800   23130 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.126820   23130 pod_ready.go:81] duration metric: took 40.005850885s for pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.126830   23130 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.128711   23130 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-vcfkz" not found
	I0719 18:16:46.128732   23130 pod_ready.go:81] duration metric: took 1.895882ms for pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace to be "Ready" ...
	E0719 18:16:46.128740   23130 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-vcfkz" not found
	I0719 18:16:46.128746   23130 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.134823   23130 pod_ready.go:92] pod "etcd-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.134849   23130 pod_ready.go:81] duration metric: took 6.095984ms for pod "etcd-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.134861   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.139198   23130 pod_ready.go:92] pod "kube-apiserver-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.139213   23130 pod_ready.go:81] duration metric: took 4.344392ms for pod "kube-apiserver-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.139221   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.143467   23130 pod_ready.go:92] pod "kube-controller-manager-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.143487   23130 pod_ready.go:81] duration metric: took 4.260676ms for pod "kube-controller-manager-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.143496   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p5fz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.318373   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:46.318566   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:46.325062   23130 pod_ready.go:92] pod "kube-proxy-7p5fz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.325079   23130 pod_ready.go:81] duration metric: took 181.577719ms for pod "kube-proxy-7p5fz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.325088   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.362886   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.551445   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:46.724931   23130 pod_ready.go:92] pod "kube-scheduler-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.724961   23130 pod_ready.go:81] duration metric: took 399.865474ms for pod "kube-scheduler-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.724972   23130 pod_ready.go:38] duration metric: took 40.622235448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
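	[editor's note] The pod_ready/kapi waits above all follow the same pattern: list pods by label selector and poll until one reports the Ready condition. A rough client-go equivalent is sketched below, under the assumption that a kubeconfig for the cluster is available at the default location; the function name and polling interval are illustrative, not minikube's internal kapi.go code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabelReady polls until a pod matching the selector reports Ready,
	// or the timeout elapses.
	func waitForLabelReady(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabelReady(client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod ready")
	}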
	I0719 18:16:46.724994   23130 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:16:46.725057   23130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:16:46.757121   23130 api_server.go:72] duration metric: took 42.542967215s to wait for apiserver process to appear ...
	I0719 18:16:46.757146   23130 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:16:46.757167   23130 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0719 18:16:46.760999   23130 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0719 18:16:46.762005   23130 api_server.go:141] control plane version: v1.30.3
	I0719 18:16:46.762025   23130 api_server.go:131] duration metric: took 4.872839ms to wait for apiserver health ...
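	[editor's note] The healthz wait above checks https://192.168.39.239:8443/healthz until it returns 200 with body "ok". A standalone sketch of that probe is below; it assumes anonymous access to /healthz (typically permitted by the default system:public-info-viewer binding) and skips TLS verification purely for brevity, whereas minikube's own check authenticates with the cluster's client certificates. Names and the one-second poll interval are illustrative.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz requests the apiserver /healthz endpoint until it answers
	// 200 "ok" or the timeout elapses.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		// URL taken from the log above.
		if err := pollHealthz("https://192.168.39.239:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("healthz: ok")
	}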
	I0719 18:16:46.762032   23130 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:16:46.819220   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:46.821402   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:46.881610   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.933743   23130 system_pods.go:59] 18 kube-system pods found
	I0719 18:16:46.933771   23130 system_pods.go:61] "coredns-7db6d8ff4d-jwrch" [3a132301-51a7-481d-8442-0d5c256608bd] Running
	I0719 18:16:46.933778   23130 system_pods.go:61] "csi-hostpath-attacher-0" [6041967e-faa9-4589-8fba-2d23c0bf4ed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 18:16:46.933785   23130 system_pods.go:61] "csi-hostpath-resizer-0" [a1ef532c-9919-4a10-8bf3-99c69ebdcf31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 18:16:46.933792   23130 system_pods.go:61] "csi-hostpathplugin-lljpc" [d10c0530-0314-425d-a850-6077fe481809] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 18:16:46.933797   23130 system_pods.go:61] "etcd-addons-990895" [d979a63a-0e7b-4787-9bde-978357bb3d20] Running
	I0719 18:16:46.933801   23130 system_pods.go:61] "kube-apiserver-addons-990895" [1e1c5d53-2489-4dfc-a17d-796526db2aa5] Running
	I0719 18:16:46.933805   23130 system_pods.go:61] "kube-controller-manager-addons-990895" [899afe35-74d9-4bae-b841-498dba02e08d] Running
	I0719 18:16:46.933809   23130 system_pods.go:61] "kube-ingress-dns-minikube" [396f1196-8e67-41ea-b4d4-86b6c843c63e] Running
	I0719 18:16:46.933812   23130 system_pods.go:61] "kube-proxy-7p5fz" [480bf6bd-8a9c-4e93-aa4e-1af321bcf057] Running
	I0719 18:16:46.933815   23130 system_pods.go:61] "kube-scheduler-addons-990895" [81fedc77-3d19-4c5e-bb4e-3503d864f120] Running
	I0719 18:16:46.933820   23130 system_pods.go:61] "metrics-server-c59844bb4-mg572" [15b48dfb-deb0-43b5-9a04-9d6896e8c5da] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 18:16:46.933829   23130 system_pods.go:61] "nvidia-device-plugin-daemonset-dpnm9" [a47bd84e-c957-4484-ad2b-20f5a4582ed9] Running
	I0719 18:16:46.933835   23130 system_pods.go:61] "registry-656c9c8d9c-fvxpl" [db275727-fea8-4f8b-9836-c0f394f7e085] Running
	I0719 18:16:46.933839   23130 system_pods.go:61] "registry-proxy-5sm5z" [5e03528f-ac60-402a-8478-15cca52a26f9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 18:16:46.933847   23130 system_pods.go:61] "snapshot-controller-745499f584-nbp46" [d6ca72ac-b155-499f-9920-a996a141f321] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:46.933853   23130 system_pods.go:61] "snapshot-controller-745499f584-pcwbg" [dc8b6b85-3529-4e88-b820-30d093e5050c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:46.933856   23130 system_pods.go:61] "storage-provisioner" [18afc862-2719-4e26-be45-80599ec151ea] Running
	I0719 18:16:46.933860   23130 system_pods.go:61] "tiller-deploy-6677d64bcd-rgpbq" [e4c9ebcd-0308-48aa-ad08-d6e84e6dbbe3] Running
	I0719 18:16:46.933865   23130 system_pods.go:74] duration metric: took 171.827786ms to wait for pod list to return data ...
	I0719 18:16:46.933871   23130 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:16:47.051234   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:47.124524   23130 default_sa.go:45] found service account: "default"
	I0719 18:16:47.124548   23130 default_sa.go:55] duration metric: took 190.67009ms for default service account to be created ...
	I0719 18:16:47.124564   23130 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:16:47.321951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:47.322420   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:47.333952   23130 system_pods.go:86] 18 kube-system pods found
	I0719 18:16:47.333984   23130 system_pods.go:89] "coredns-7db6d8ff4d-jwrch" [3a132301-51a7-481d-8442-0d5c256608bd] Running
	I0719 18:16:47.333998   23130 system_pods.go:89] "csi-hostpath-attacher-0" [6041967e-faa9-4589-8fba-2d23c0bf4ed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 18:16:47.334009   23130 system_pods.go:89] "csi-hostpath-resizer-0" [a1ef532c-9919-4a10-8bf3-99c69ebdcf31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 18:16:47.334021   23130 system_pods.go:89] "csi-hostpathplugin-lljpc" [d10c0530-0314-425d-a850-6077fe481809] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 18:16:47.334033   23130 system_pods.go:89] "etcd-addons-990895" [d979a63a-0e7b-4787-9bde-978357bb3d20] Running
	I0719 18:16:47.334040   23130 system_pods.go:89] "kube-apiserver-addons-990895" [1e1c5d53-2489-4dfc-a17d-796526db2aa5] Running
	I0719 18:16:47.334050   23130 system_pods.go:89] "kube-controller-manager-addons-990895" [899afe35-74d9-4bae-b841-498dba02e08d] Running
	I0719 18:16:47.334061   23130 system_pods.go:89] "kube-ingress-dns-minikube" [396f1196-8e67-41ea-b4d4-86b6c843c63e] Running
	I0719 18:16:47.334069   23130 system_pods.go:89] "kube-proxy-7p5fz" [480bf6bd-8a9c-4e93-aa4e-1af321bcf057] Running
	I0719 18:16:47.334078   23130 system_pods.go:89] "kube-scheduler-addons-990895" [81fedc77-3d19-4c5e-bb4e-3503d864f120] Running
	I0719 18:16:47.334090   23130 system_pods.go:89] "metrics-server-c59844bb4-mg572" [15b48dfb-deb0-43b5-9a04-9d6896e8c5da] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 18:16:47.334100   23130 system_pods.go:89] "nvidia-device-plugin-daemonset-dpnm9" [a47bd84e-c957-4484-ad2b-20f5a4582ed9] Running
	I0719 18:16:47.334109   23130 system_pods.go:89] "registry-656c9c8d9c-fvxpl" [db275727-fea8-4f8b-9836-c0f394f7e085] Running
	I0719 18:16:47.334121   23130 system_pods.go:89] "registry-proxy-5sm5z" [5e03528f-ac60-402a-8478-15cca52a26f9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 18:16:47.334175   23130 system_pods.go:89] "snapshot-controller-745499f584-nbp46" [d6ca72ac-b155-499f-9920-a996a141f321] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:47.334194   23130 system_pods.go:89] "snapshot-controller-745499f584-pcwbg" [dc8b6b85-3529-4e88-b820-30d093e5050c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:47.334205   23130 system_pods.go:89] "storage-provisioner" [18afc862-2719-4e26-be45-80599ec151ea] Running
	I0719 18:16:47.334215   23130 system_pods.go:89] "tiller-deploy-6677d64bcd-rgpbq" [e4c9ebcd-0308-48aa-ad08-d6e84e6dbbe3] Running
	I0719 18:16:47.334226   23130 system_pods.go:126] duration metric: took 209.655247ms to wait for k8s-apps to be running ...
	I0719 18:16:47.334238   23130 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:16:47.334296   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:16:47.364748   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:47.365451   23130 system_svc.go:56] duration metric: took 31.208011ms WaitForService to wait for kubelet
	I0719 18:16:47.365476   23130 kubeadm.go:582] duration metric: took 43.151326989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:16:47.365499   23130 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:16:47.525027   23130 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:16:47.525050   23130 node_conditions.go:123] node cpu capacity is 2
	I0719 18:16:47.525063   23130 node_conditions.go:105] duration metric: took 159.558548ms to run NodePressure ...
	I0719 18:16:47.525074   23130 start.go:241] waiting for startup goroutines ...
	I0719 18:16:47.551166   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:47.816974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:47.817126   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:47.862582   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:48.051109   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:48.317707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:48.317967   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:48.362535   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:48.550912   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:48.816751   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:48.816810   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:48.862255   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:49.051533   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:49.315971   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:49.316640   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:49.362933   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:49.551102   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:49.816474   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:49.816623   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:49.862237   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:50.051162   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:50.317616   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:50.318090   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:50.362291   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:50.550882   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:50.819527   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:50.826694   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:50.862295   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:51.050917   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:51.315650   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:51.317528   23130 kapi.go:107] duration metric: took 39.005853444s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 18:16:51.362844   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:51.551376   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:51.819784   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:51.865606   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:52.052168   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:52.315614   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:52.362888   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:52.551777   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:52.817085   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:52.863078   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:53.050907   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:53.315642   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:53.364292   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:53.551695   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:53.816708   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:53.867211   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:54.051305   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:54.317137   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:54.362619   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:54.551409   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:54.820850   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:54.862972   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:55.053401   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:55.316366   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:55.361907   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:55.551198   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:55.815798   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:55.867517   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:56.051259   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:56.474900   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:56.476066   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:56.552329   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:56.816320   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:56.862825   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:57.051391   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:57.316136   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:57.362170   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:57.550211   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:57.816439   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:57.864045   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:58.051555   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:58.317307   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:58.362634   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:58.550751   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:58.817096   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:58.861992   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:59.052933   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:59.316913   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:59.362407   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:59.551066   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:59.815813   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:59.864947   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:00.051156   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:00.317112   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:00.365404   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:00.556563   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:00.816208   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:00.861862   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:01.051476   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:01.316120   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:01.365138   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:01.842896   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:01.843150   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:01.861889   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:02.051885   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:02.316764   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:02.362899   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:02.551205   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:02.815517   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:02.863494   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:03.051891   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:03.317335   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:03.362004   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:03.552279   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:03.823390   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:03.865216   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:04.053146   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:04.316535   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:04.363052   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:04.550853   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:04.816202   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:04.863145   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:05.051644   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:05.316428   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:05.362525   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:05.551458   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:05.816248   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:05.862272   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:06.050564   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:06.317699   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:06.362951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:06.551383   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:06.959189   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:06.959940   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:07.051201   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:07.315815   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:07.365167   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:07.551168   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:07.816584   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:07.862836   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:08.051325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:08.316523   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:08.363070   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:08.552118   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:08.816202   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:08.863609   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:09.051311   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:09.322797   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:09.362702   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:09.551528   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:09.816251   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:09.862348   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:10.051576   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:10.317168   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:10.362892   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:10.670783   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:10.816910   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:10.863055   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:11.051437   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:11.316141   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:11.363022   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:11.551619   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:11.816583   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:11.865847   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:12.051178   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:12.317752   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:12.362871   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:12.551186   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:12.817028   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:12.866106   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:13.051525   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:13.316669   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:13.362824   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:13.551310   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:13.816725   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:13.862882   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:14.051395   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:14.317129   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:14.365767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:14.551146   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:14.816102   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:14.874682   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:15.051214   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:15.316329   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:15.369634   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:15.551508   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:15.816314   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:15.862510   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:16.050926   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:16.316680   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:16.364438   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:16.551919   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:16.818629   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:16.862974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:17.051910   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:17.315599   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:17.363096   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:17.551823   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:17.816928   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:17.862047   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:18.052572   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:18.316517   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:18.362868   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:18.550480   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:18.817688   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:18.868338   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:19.051790   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:19.317032   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:19.363119   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:19.558055   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.031775   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.032793   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:20.052920   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.317123   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.363042   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:20.550647   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.816161   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.863152   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:21.051281   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:21.318518   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:21.362973   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:21.550899   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:21.817187   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:21.863639   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:22.051103   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:22.316328   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:22.362419   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.040992   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.044282   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.044495   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.054760   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.317688   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.361898   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.551279   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.816661   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.863224   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:24.052723   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:24.316358   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:24.371049   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:24.551179   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:24.816475   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:24.863225   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:25.054597   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:25.316908   23130 kapi.go:107] duration metric: took 1m13.005042748s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 18:17:25.365061   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:25.551295   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:25.873747   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:26.455220   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:26.465831   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:26.550967   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:26.863271   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:27.051557   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:27.362976   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:27.552145   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:27.862719   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:28.054376   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:28.362643   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:28.550895   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:28.863403   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:29.051555   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:29.362330   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:29.550770   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:29.863014   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:30.051974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:30.370707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:30.551009   23130 kapi.go:107] duration metric: took 1m16.003475156s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 18:17:30.552884   23130 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-990895 cluster.
	I0719 18:17:30.554534   23130 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 18:17:30.555882   23130 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
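	(Illustrative aside, not part of the captured log: a minimal sketch of the opt-out and refresh steps the gcp-auth messages above describe. The pod name "demo" and the nginx image are placeholders chosen for the example; the `gcp-auth-skip-secret` label key and the `--refresh` flag come from the log lines themselves.)

	    kubectl --context addons-990895 run demo --image=nginx --labels=gcp-auth-skip-secret=true   # pod is skipped by the gcp-auth webhook
	    minikube -p addons-990895 addons enable gcp-auth --refresh                                   # remount credentials into pre-existing pods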
	I0719 18:17:30.863800   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:31.369378   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:31.868656   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:32.363109   23130 kapi.go:107] duration metric: took 1m19.505854784s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 18:17:32.364719   23130 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0719 18:17:32.365888   23130 addons.go:510] duration metric: took 1m28.151695498s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0719 18:17:32.365925   23130 start.go:246] waiting for cluster config update ...
	I0719 18:17:32.365947   23130 start.go:255] writing updated cluster config ...
	I0719 18:17:32.366180   23130 ssh_runner.go:195] Run: rm -f paused
	I0719 18:17:32.415428   23130 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 18:17:32.417272   23130 out.go:177] * Done! kubectl is now configured to use "addons-990895" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.394644331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413235394618368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e451469-5fbb-4040-8c2c-e77f54a24a62 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.395172511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83c88dc8-3c54-415d-9604-a3f735869e1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.395247641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83c88dc8-3c54-415d-9604-a3f735869e1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.395775029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b13a25a266eb840848bcc2e3772733071b8d1af378e41742b75074b0905a4fd,PodSandboxId:20db14c9f5bb5da4ae8e3a59d1f540b3eff0eaac6e529c423e42629996aa3b58,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721413045235245246,Labels:map[string]s
tring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xxz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab9af6ca-4564-48e1-9ad5-e440e865f1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 393951c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a1189b89e09e8da4575632b354554ad147075ab68dadf2da971ff80afae288,PodSandboxId:a316ff22c103c495bbb36a8c850045782d815dd0d09ce7fb76390511b9dabadc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:172
1413030952875796,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gshv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6ef6af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cre
atedAt:1721413023779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628
db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb
0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1
348f4,PodSandboxId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe
74884b0c77ed0c13e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c
8d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Met
adata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83c88dc8-3c54-415d-9604-a3f735869e1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.433866202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4fb8755-18ba-4bb3-8742-f35ea63380ac name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.433939222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4fb8755-18ba-4bb3-8742-f35ea63380ac name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.435136901Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ec32a23-5b76-48e0-8b7a-bb5702538cdd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.436520229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413235436490463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ec32a23-5b76-48e0-8b7a-bb5702538cdd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.437229510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48bd0c25-d853-4b96-a49f-9862600c3fe9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.437281673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48bd0c25-d853-4b96-a49f-9862600c3fe9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.437686071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b13a25a266eb840848bcc2e3772733071b8d1af378e41742b75074b0905a4fd,PodSandboxId:20db14c9f5bb5da4ae8e3a59d1f540b3eff0eaac6e529c423e42629996aa3b58,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721413045235245246,Labels:map[string]s
tring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xxz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab9af6ca-4564-48e1-9ad5-e440e865f1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 393951c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a1189b89e09e8da4575632b354554ad147075ab68dadf2da971ff80afae288,PodSandboxId:a316ff22c103c495bbb36a8c850045782d815dd0d09ce7fb76390511b9dabadc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:172
1413030952875796,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gshv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6ef6af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cre
atedAt:1721413023779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628
db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb
0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1
348f4,PodSandboxId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe
74884b0c77ed0c13e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c
8d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Met
adata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48bd0c25-d853-4b96-a49f-9862600c3fe9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.470953074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4f110cd-fa44-4d55-9fe5-b7d92ca93d5f name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.471025084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4f110cd-fa44-4d55-9fe5-b7d92ca93d5f name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.472254270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c678b3f-be9e-4267-a882-6cb43d172bbb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.473665880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413235473637836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c678b3f-be9e-4267-a882-6cb43d172bbb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.474306103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=836ab821-3443-41c5-a4c6-4b610a70e82c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.474405926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=836ab821-3443-41c5-a4c6-4b610a70e82c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.474851245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b13a25a266eb840848bcc2e3772733071b8d1af378e41742b75074b0905a4fd,PodSandboxId:20db14c9f5bb5da4ae8e3a59d1f540b3eff0eaac6e529c423e42629996aa3b58,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721413045235245246,Labels:map[string]s
tring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xxz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab9af6ca-4564-48e1-9ad5-e440e865f1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 393951c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a1189b89e09e8da4575632b354554ad147075ab68dadf2da971ff80afae288,PodSandboxId:a316ff22c103c495bbb36a8c850045782d815dd0d09ce7fb76390511b9dabadc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:172
1413030952875796,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gshv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6ef6af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cre
atedAt:1721413023779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628
db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb
0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1
348f4,PodSandboxId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe
74884b0c77ed0c13e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c
8d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Met
adata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=836ab821-3443-41c5-a4c6-4b610a70e82c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.516694069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=659348fe-14eb-4d88-b4f2-865992a743fd name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.516771678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=659348fe-14eb-4d88-b4f2-865992a743fd name=/runtime.v1.RuntimeService/Version
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.517727105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adcfee88-3caa-4d41-8f33-b0a90ced0c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.519037924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413235519008217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adcfee88-3caa-4d41-8f33-b0a90ced0c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.519620846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c05d36b1-46a2-47b3-bc82-46350e2ef631 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.519678821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c05d36b1-46a2-47b3-bc82-46350e2ef631 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:20:35 addons-990895 crio[679]: time="2024-07-19 18:20:35.520132521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b13a25a266eb840848bcc2e3772733071b8d1af378e41742b75074b0905a4fd,PodSandboxId:20db14c9f5bb5da4ae8e3a59d1f540b3eff0eaac6e529c423e42629996aa3b58,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721413045235245246,Labels:map[string]s
tring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xxz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab9af6ca-4564-48e1-9ad5-e440e865f1a5,},Annotations:map[string]string{io.kubernetes.container.hash: 393951c1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28a1189b89e09e8da4575632b354554ad147075ab68dadf2da971ff80afae288,PodSandboxId:a316ff22c103c495bbb36a8c850045782d815dd0d09ce7fb76390511b9dabadc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:172
1413030952875796,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9gshv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6ef6af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cre
atedAt:1721413023779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628
db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb
0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1
348f4,PodSandboxId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe
74884b0c77ed0c13e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c
8d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&Con
tainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Met
adata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c05d36b1-46a2-47b3-bc82-46350e2ef631 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0256ab93b757e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   79621fa66f418       hello-world-app-6778b5fc9f-m4qdd
	006cea3b2166c       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   9d3e46f75e8d2       nginx
	9830b71c2bcaf       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   2c3a546e0577e       headlamp-7867546754-ggx6p
	aea9cff6ef549       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   c72e3404ed94f       gcp-auth-5db96cd9b4-859fv
	8b13a25a266eb       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     2                   20db14c9f5bb5       ingress-nginx-admission-patch-xxz68
	28a1189b89e09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   a316ff22c103c       ingress-nginx-admission-create-9gshv
	b39d84faccad8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   b10b41564ba74       yakd-dashboard-799879c74f-z49l4
	3eb809f616606       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   eb4d9b2e9b8ff       metrics-server-c59844bb4-mg572
	20f6f7da0a8f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9cd816ea8024a       storage-provisioner
	21cd681993385       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   00a04b0e111a0       coredns-7db6d8ff4d-jwrch
	c7d96760d4675       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   0c21da4194a4e       kube-proxy-7p5fz
	a8f8b45a1729c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   eb0b90522f2df       etcd-addons-990895
	8c4e6fd25057f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   6c312e29dbb86       kube-scheduler-addons-990895
	fa9964845eb57       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   4b8769217aee6       kube-apiserver-addons-990895
	fea3fc9975756       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   3bca9b426aad9       kube-controller-manager-addons-990895
	
	
	==> coredns [21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8] <==
	[INFO] 10.244.0.6:41554 - 50086 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112725s
	[INFO] 10.244.0.6:57414 - 60528 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100214s
	[INFO] 10.244.0.6:57414 - 26493 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110981s
	[INFO] 10.244.0.6:39655 - 15772 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086603s
	[INFO] 10.244.0.6:39655 - 3746 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056977s
	[INFO] 10.244.0.6:37715 - 22527 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147521s
	[INFO] 10.244.0.6:37715 - 5373 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191802s
	[INFO] 10.244.0.6:40998 - 4681 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104731s
	[INFO] 10.244.0.6:40998 - 20820 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000029699s
	[INFO] 10.244.0.6:39074 - 4806 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062341s
	[INFO] 10.244.0.6:39074 - 37688 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026965s
	[INFO] 10.244.0.6:54863 - 21888 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005239s
	[INFO] 10.244.0.6:54863 - 16002 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027191s
	[INFO] 10.244.0.6:49680 - 5507 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038985s
	[INFO] 10.244.0.6:49680 - 32924 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045142s
	[INFO] 10.244.0.21:47372 - 18729 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000524215s
	[INFO] 10.244.0.21:42728 - 41363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000585279s
	[INFO] 10.244.0.21:55559 - 12452 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161149s
	[INFO] 10.244.0.21:36444 - 26844 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109007s
	[INFO] 10.244.0.21:42041 - 28288 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101659s
	[INFO] 10.244.0.21:36779 - 43252 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150629s
	[INFO] 10.244.0.21:43719 - 37933 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000367s
	[INFO] 10.244.0.21:56968 - 25142 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000600053s
	[INFO] 10.244.0.24:47193 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00135043s
	[INFO] 10.244.0.24:33629 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112674s
	
	
	==> describe nodes <==
	Name:               addons-990895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-990895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=addons-990895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-990895
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:15:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-990895
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:20:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:18:55 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:18:55 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:18:55 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:18:55 +0000   Fri, 19 Jul 2024 18:15:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    addons-990895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f93e701e37e4a1d91833568e7381528
	  System UUID:                4f93e701-e37e-4a1d-9183-3568e7381528
	  Boot ID:                    375557a4-c3ff-4528-aec2-4e8e0e5a58cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-m4qdd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-5db96cd9b4-859fv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-7867546754-ggx6p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 coredns-7db6d8ff4d-jwrch                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m31s
	  kube-system                 etcd-addons-990895                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-990895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-990895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-7p5fz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-990895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-c59844bb4-mg572           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-799879c74f-z49l4          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m25s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-990895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-990895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-990895 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m44s  kubelet          Node addons-990895 status is now: NodeReady
	  Normal  RegisteredNode           4m31s  node-controller  Node addons-990895 event: Registered Node addons-990895 in Controller
	
	
	==> dmesg <==
	[  +6.866741] kauditd_printk_skb: 18 callbacks suppressed
	[Jul19 18:16] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +5.170765] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.206798] kauditd_printk_skb: 164 callbacks suppressed
	[ +10.802979] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.607031] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.583469] kauditd_printk_skb: 8 callbacks suppressed
	[Jul19 18:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.362704] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.104312] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.770955] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.004650] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.029535] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.307196] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.528522] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.894844] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.053126] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.463056] kauditd_printk_skb: 39 callbacks suppressed
	[Jul19 18:18] kauditd_printk_skb: 29 callbacks suppressed
	[ +19.949442] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.059099] kauditd_printk_skb: 3 callbacks suppressed
	[ +19.126366] kauditd_printk_skb: 7 callbacks suppressed
	[Jul19 18:19] kauditd_printk_skb: 33 callbacks suppressed
	[Jul19 18:20] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.151290] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808] <==
	{"level":"warn","ts":"2024-07-19T18:17:26.431073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:26.010081Z","time spent":"420.987177ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":4619,"request content":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xxz68\" "}
	{"level":"info","ts":"2024-07-19T18:17:26.43064Z","caller":"traceutil/trace.go:171","msg":"trace[1604364992] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"419.954168ms","start":"2024-07-19T18:17:26.010117Z","end":"2024-07-19T18:17:26.430071Z","steps":["trace[1604364992] 'read index received'  (duration: 413.28645ms)","trace[1604364992] 'applied index is now lower than readState.Index'  (duration: 6.666969ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T18:17:26.432734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.433021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-19T18:17:26.4328Z","caller":"traceutil/trace.go:171","msg":"trace[1971354546] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1125; }","duration":"395.525265ms","start":"2024-07-19T18:17:26.037266Z","end":"2024-07-19T18:17:26.432791Z","steps":["trace[1971354546] 'agreement among raft nodes before linearized reading'  (duration: 395.352742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:26.432844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:26.037251Z","time spent":"395.584389ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-07-19T18:17:31.230495Z","caller":"traceutil/trace.go:171","msg":"trace[1378868475] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"145.171608ms","start":"2024-07-19T18:17:31.0853Z","end":"2024-07-19T18:17:31.230471Z","steps":["trace[1378868475] 'process raft request'  (duration: 144.821113ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:17:47.654447Z","caller":"traceutil/trace.go:171","msg":"trace[46878725] linearizableReadLoop","detail":"{readStateIndex:1351; appliedIndex:1350; }","duration":"362.054947ms","start":"2024-07-19T18:17:47.292375Z","end":"2024-07-19T18:17:47.65443Z","steps":["trace[46878725] 'read index received'  (duration: 361.770592ms)","trace[46878725] 'applied index is now lower than readState.Index'  (duration: 283.682µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T18:17:47.654539Z","caller":"traceutil/trace.go:171","msg":"trace[1664972505] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"401.863233ms","start":"2024-07-19T18:17:47.252668Z","end":"2024-07-19T18:17:47.654532Z","steps":["trace[1664972505] 'process raft request'  (duration: 401.516738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.654716Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.252656Z","time spent":"401.901611ms","remote":"127.0.0.1:47740","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1265 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-19T18:17:47.654861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.496699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T18:17:47.654898Z","caller":"traceutil/trace.go:171","msg":"trace[770314431] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1311; }","duration":"362.599042ms","start":"2024-07-19T18:17:47.292291Z","end":"2024-07-19T18:17:47.65489Z","steps":["trace[770314431] 'agreement among raft nodes before linearized reading'  (duration: 362.507874ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.654919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.292276Z","time spent":"362.636437ms","remote":"127.0.0.1:47672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-19T18:17:47.655071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.449591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-19T18:17:47.655102Z","caller":"traceutil/trace.go:171","msg":"trace[683584421] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1311; }","duration":"351.499102ms","start":"2024-07-19T18:17:47.303597Z","end":"2024-07-19T18:17:47.655096Z","steps":["trace[683584421] 'agreement among raft nodes before linearized reading'  (duration: 351.427148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.655128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.303583Z","time spent":"351.534179ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3989,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"info","ts":"2024-07-19T18:18:26.206107Z","caller":"traceutil/trace.go:171","msg":"trace[1475274680] linearizableReadLoop","detail":"{readStateIndex:1587; appliedIndex:1586; }","duration":"334.91608ms","start":"2024-07-19T18:18:25.871155Z","end":"2024-07-19T18:18:26.206071Z","steps":["trace[1475274680] 'read index received'  (duration: 334.711855ms)","trace[1475274680] 'applied index is now lower than readState.Index'  (duration: 203.431µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T18:18:26.206283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.106455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T18:18:26.20628Z","caller":"traceutil/trace.go:171","msg":"trace[1277275665] transaction","detail":"{read_only:false; response_revision:1534; number_of_response:1; }","duration":"385.997061ms","start":"2024-07-19T18:18:25.820263Z","end":"2024-07-19T18:18:26.20626Z","steps":["trace[1277275665] 'process raft request'  (duration: 385.621381ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:18:26.20631Z","caller":"traceutil/trace.go:171","msg":"trace[4348081] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1534; }","duration":"335.175699ms","start":"2024-07-19T18:18:25.871126Z","end":"2024-07-19T18:18:26.206302Z","steps":["trace[4348081] 'agreement among raft nodes before linearized reading'  (duration: 335.071357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:18:26.206371Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:18:25.871107Z","time spent":"335.257667ms","remote":"127.0.0.1:47672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-19T18:18:26.206428Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:18:25.820251Z","time spent":"386.062518ms","remote":"127.0.0.1:47740","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1526 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-19T18:18:58.598037Z","caller":"traceutil/trace.go:171","msg":"trace[1931559067] linearizableReadLoop","detail":"{readStateIndex:1787; appliedIndex:1786; }","duration":"145.883721ms","start":"2024-07-19T18:18:58.452125Z","end":"2024-07-19T18:18:58.598009Z","steps":["trace[1931559067] 'read index received'  (duration: 145.692277ms)","trace[1931559067] 'applied index is now lower than readState.Index'  (duration: 190.494µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T18:18:58.598264Z","caller":"traceutil/trace.go:171","msg":"trace[918694199] transaction","detail":"{read_only:false; response_revision:1722; number_of_response:1; }","duration":"234.747826ms","start":"2024-07-19T18:18:58.363501Z","end":"2024-07-19T18:18:58.598249Z","steps":["trace[918694199] 'process raft request'  (duration: 234.371754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:18:58.598435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.161607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-snapshotter-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T18:18:58.598494Z","caller":"traceutil/trace.go:171","msg":"trace[660000056] range","detail":"{range_begin:/registry/clusterroles/external-snapshotter-runner; range_end:; response_count:0; response_revision:1722; }","duration":"146.381682ms","start":"2024-07-19T18:18:58.4521Z","end":"2024-07-19T18:18:58.598482Z","steps":["trace[660000056] 'agreement among raft nodes before linearized reading'  (duration: 146.157251ms)"],"step_count":1}
	
	
	==> gcp-auth [aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c] <==
	2024/07/19 18:17:29 GCP Auth Webhook started!
	2024/07/19 18:17:38 Ready to marshal response ...
	2024/07/19 18:17:38 Ready to write response ...
	2024/07/19 18:17:38 Ready to marshal response ...
	2024/07/19 18:17:38 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:43 Ready to marshal response ...
	2024/07/19 18:17:43 Ready to write response ...
	2024/07/19 18:17:55 Ready to marshal response ...
	2024/07/19 18:17:55 Ready to write response ...
	2024/07/19 18:17:56 Ready to marshal response ...
	2024/07/19 18:17:56 Ready to write response ...
	2024/07/19 18:18:02 Ready to marshal response ...
	2024/07/19 18:18:02 Ready to write response ...
	2024/07/19 18:18:20 Ready to marshal response ...
	2024/07/19 18:18:20 Ready to write response ...
	2024/07/19 18:18:45 Ready to marshal response ...
	2024/07/19 18:18:45 Ready to write response ...
	2024/07/19 18:20:25 Ready to marshal response ...
	2024/07/19 18:20:25 Ready to write response ...
	
	
	==> kernel <==
	 18:20:35 up 5 min,  0 users,  load average: 0.45, 0.99, 0.51
	Linux addons-990895 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d] <==
	E0719 18:17:49.611616       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 18:17:49.611942       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.617452       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.638144       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.679826       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	I0719 18:17:49.804983       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 18:18:01.915210       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0719 18:18:02.109760       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.44.136"}
	E0719 18:18:12.273573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 18:18:32.636156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 18:19:01.024666       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.027742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.053155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.053194       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.061914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.061969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.077006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.077058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.096677       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.096722       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0719 18:19:02.062499       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 18:19:02.096986       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 18:19:02.107524       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0719 18:20:25.530461       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.38.15"}
	
	
	==> kube-controller-manager [fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64] <==
	W0719 18:19:24.921498       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:19:24.921649       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:19:33.690986       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:19:33.691049       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:19:38.801631       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:19:38.801790       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:19:41.275531       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:19:41.275582       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:20:04.260309       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:20:04.260488       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:20:05.263350       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:20:05.263389       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:20:15.475910       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:20:15.475962       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 18:20:25.398881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="45.644182ms"
	I0719 18:20:25.413049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="14.045461ms"
	I0719 18:20:25.436877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="23.773822ms"
	I0719 18:20:25.437117       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="78.43µs"
	I0719 18:20:27.651996       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0719 18:20:27.655388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.448µs"
	I0719 18:20:27.661905       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0719 18:20:29.003821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.146392ms"
	I0719 18:20:29.004289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="87.629µs"
	W0719 18:20:30.010636       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:20:30.010691       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4] <==
	I0719 18:16:08.959991       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:16:09.169414       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.239"]
	I0719 18:16:09.975251       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:16:09.975313       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:16:09.986136       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:16:10.037635       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:16:10.037868       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:16:10.037882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:16:10.068283       1 config.go:192] "Starting service config controller"
	I0719 18:16:10.068300       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:16:10.068394       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:16:10.068399       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:16:10.069051       1 config.go:319] "Starting node config controller"
	I0719 18:16:10.069060       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:16:10.170978       1 shared_informer.go:320] Caches are synced for node config
	I0719 18:16:10.171019       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:16:10.171049       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95] <==
	W0719 18:15:47.912471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 18:15:47.912480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 18:15:47.914646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:15:47.914684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:15:47.914726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 18:15:47.914750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 18:15:47.917769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:15:47.917811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:15:47.918042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:15:47.923061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:15:47.918114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:15:47.923119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:15:47.918172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 18:15:47.923169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 18:15:47.918208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 18:15:47.923197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 18:15:48.787680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:15:48.787808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:15:48.915781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:15:48.915825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:15:48.944943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:15:48.944988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:15:49.167608       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:15:49.167652       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 18:15:52.201529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 18:20:25 addons-990895 kubelet[1270]: I0719 18:20:25.404790    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10c0530-0314-425d-a850-6077fe481809" containerName="liveness-probe"
	Jul 19 18:20:25 addons-990895 kubelet[1270]: I0719 18:20:25.442944    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77mk\" (UniqueName: \"kubernetes.io/projected/26248856-8eff-4f35-a658-945273146c0f-kube-api-access-j77mk\") pod \"hello-world-app-6778b5fc9f-m4qdd\" (UID: \"26248856-8eff-4f35-a658-945273146c0f\") " pod="default/hello-world-app-6778b5fc9f-m4qdd"
	Jul 19 18:20:25 addons-990895 kubelet[1270]: I0719 18:20:25.443002    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/26248856-8eff-4f35-a658-945273146c0f-gcp-creds\") pod \"hello-world-app-6778b5fc9f-m4qdd\" (UID: \"26248856-8eff-4f35-a658-945273146c0f\") " pod="default/hello-world-app-6778b5fc9f-m4qdd"
	Jul 19 18:20:26 addons-990895 kubelet[1270]: I0719 18:20:26.452294    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vxq9\" (UniqueName: \"kubernetes.io/projected/396f1196-8e67-41ea-b4d4-86b6c843c63e-kube-api-access-5vxq9\") pod \"396f1196-8e67-41ea-b4d4-86b6c843c63e\" (UID: \"396f1196-8e67-41ea-b4d4-86b6c843c63e\") "
	Jul 19 18:20:26 addons-990895 kubelet[1270]: I0719 18:20:26.454252    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396f1196-8e67-41ea-b4d4-86b6c843c63e-kube-api-access-5vxq9" (OuterVolumeSpecName: "kube-api-access-5vxq9") pod "396f1196-8e67-41ea-b4d4-86b6c843c63e" (UID: "396f1196-8e67-41ea-b4d4-86b6c843c63e"). InnerVolumeSpecName "kube-api-access-5vxq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 18:20:26 addons-990895 kubelet[1270]: I0719 18:20:26.553154    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5vxq9\" (UniqueName: \"kubernetes.io/projected/396f1196-8e67-41ea-b4d4-86b6c843c63e-kube-api-access-5vxq9\") on node \"addons-990895\" DevicePath \"\""
	Jul 19 18:20:26 addons-990895 kubelet[1270]: I0719 18:20:26.969027    1270 scope.go:117] "RemoveContainer" containerID="2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092"
	Jul 19 18:20:27 addons-990895 kubelet[1270]: I0719 18:20:27.003102    1270 scope.go:117] "RemoveContainer" containerID="2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092"
	Jul 19 18:20:27 addons-990895 kubelet[1270]: E0719 18:20:27.003921    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092\": container with ID starting with 2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092 not found: ID does not exist" containerID="2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092"
	Jul 19 18:20:27 addons-990895 kubelet[1270]: I0719 18:20:27.003968    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092"} err="failed to get container status \"2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092\": rpc error: code = NotFound desc = could not find container \"2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092\": container with ID starting with 2c11858c08afc53dd417c5aafa4b3483484a6ad75f19c70c60ccce66e7059092 not found: ID does not exist"
	Jul 19 18:20:28 addons-990895 kubelet[1270]: I0719 18:20:28.222194    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396f1196-8e67-41ea-b4d4-86b6c843c63e" path="/var/lib/kubelet/pods/396f1196-8e67-41ea-b4d4-86b6c843c63e/volumes"
	Jul 19 18:20:28 addons-990895 kubelet[1270]: I0719 18:20:28.223006    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b" path="/var/lib/kubelet/pods/7d4585ad-0ffb-44ab-b1cd-6c7f7abacf7b/volumes"
	Jul 19 18:20:28 addons-990895 kubelet[1270]: I0719 18:20:28.223757    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9af6ca-4564-48e1-9ad5-e440e865f1a5" path="/var/lib/kubelet/pods/ab9af6ca-4564-48e1-9ad5-e440e865f1a5/volumes"
	Jul 19 18:20:28 addons-990895 kubelet[1270]: I0719 18:20:28.993443    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-m4qdd" podStartSLOduration=1.7633904729999998 podStartE2EDuration="3.99342144s" podCreationTimestamp="2024-07-19 18:20:25 +0000 UTC" firstStartedPulling="2024-07-19 18:20:25.984062806 +0000 UTC m=+275.873880316" lastFinishedPulling="2024-07-19 18:20:28.214093783 +0000 UTC m=+278.103911283" observedRunningTime="2024-07-19 18:20:28.992753304 +0000 UTC m=+278.882570823" watchObservedRunningTime="2024-07-19 18:20:28.99342144 +0000 UTC m=+278.883238959"
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.882473    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-webhook-cert\") pod \"46cb4a92-9a12-4ef8-8f9e-81f71fb2409b\" (UID: \"46cb4a92-9a12-4ef8-8f9e-81f71fb2409b\") "
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.882523    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdftw\" (UniqueName: \"kubernetes.io/projected/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-kube-api-access-zdftw\") pod \"46cb4a92-9a12-4ef8-8f9e-81f71fb2409b\" (UID: \"46cb4a92-9a12-4ef8-8f9e-81f71fb2409b\") "
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.884389    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-kube-api-access-zdftw" (OuterVolumeSpecName: "kube-api-access-zdftw") pod "46cb4a92-9a12-4ef8-8f9e-81f71fb2409b" (UID: "46cb4a92-9a12-4ef8-8f9e-81f71fb2409b"). InnerVolumeSpecName "kube-api-access-zdftw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.889477    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "46cb4a92-9a12-4ef8-8f9e-81f71fb2409b" (UID: "46cb4a92-9a12-4ef8-8f9e-81f71fb2409b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.983538    1270 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-webhook-cert\") on node \"addons-990895\" DevicePath \"\""
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.983594    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zdftw\" (UniqueName: \"kubernetes.io/projected/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b-kube-api-access-zdftw\") on node \"addons-990895\" DevicePath \"\""
	Jul 19 18:20:30 addons-990895 kubelet[1270]: I0719 18:20:30.993482    1270 scope.go:117] "RemoveContainer" containerID="685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558"
	Jul 19 18:20:31 addons-990895 kubelet[1270]: I0719 18:20:31.015740    1270 scope.go:117] "RemoveContainer" containerID="685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558"
	Jul 19 18:20:31 addons-990895 kubelet[1270]: E0719 18:20:31.016180    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558\": container with ID starting with 685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558 not found: ID does not exist" containerID="685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558"
	Jul 19 18:20:31 addons-990895 kubelet[1270]: I0719 18:20:31.016298    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558"} err="failed to get container status \"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558\": rpc error: code = NotFound desc = could not find container \"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558\": container with ID starting with 685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558 not found: ID does not exist"
	Jul 19 18:20:32 addons-990895 kubelet[1270]: I0719 18:20:32.221854    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cb4a92-9a12-4ef8-8f9e-81f71fb2409b" path="/var/lib/kubelet/pods/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b/volumes"
	
	
	==> storage-provisioner [20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439] <==
	I0719 18:16:11.148205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 18:16:11.279460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 18:16:11.279576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 18:16:11.369147       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 18:16:11.389561       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe!
	I0719 18:16:11.371130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21af0ea0-5ada-4f86-9f13-08f4782e4c03", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe became leader
	I0719 18:16:11.490436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-990895 -n addons-990895
helpers_test.go:261: (dbg) Run:  kubectl --context addons-990895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.91s)
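The storage-provisioner log in the dump above shows the provisioner acquiring the kube-system/k8s.io-minikube-hostpath lease before it starts its controller ("attempting to acquire leader lease ... successfully acquired lease ... Starting provisioner controller"). Below is a minimal sketch of that client-go leader-election pattern; it is not the provisioner's actual source. The Lease-based lock is an assumption (the Event in the log indicates the provisioner in this build uses an Endpoints-backed lock), and runWithLeaderElection, startController and id are hypothetical names.

package leaderexample

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection blocks until this candidate wins the
// kube-system/k8s.io-minikube-hostpath lock, then runs startController,
// mirroring the sequence of log lines quoted above.
func runWithLeaderElection(ctx context.Context, cfg *rest.Config, id string, startController func(context.Context)) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: startController, // "successfully acquired lease" in the log
			OnStoppedLeading: func() { /* lost the lease; stop provisioning */ },
		},
	})
	return nil
}

Only one replica holds the lock at a time, which is why the "became leader" Event and the "Started provisioner controller" line appear once per election in the log.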

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (315.01s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.481636ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-mg572" [15b48dfb-deb0-43b5-9a04-9d6896e8c5da] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00428451s
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (72.125638ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-990895, age: 2m8.635897709s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (72.200892ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-990895, age: 2m11.666217101s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (65.123139ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 2m2.976891002s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (61.057188ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 2m12.771053722s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (82.393877ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 2m23.382795828s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (64.978477ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 2m41.424297104s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (59.368778ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 2m58.201963855s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (63.547888ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 3m31.779337444s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (64.275818ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 4m11.546619156s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (63.452857ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 5m35.498402445s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990895 top pods -n kube-system: exit status 1 (61.539034ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-jwrch, age: 7m0.963142013s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
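The failures above come from the test repeatedly running kubectl top pods -n kube-system and giving up after its deadline; every attempt fails with "Metrics not available" while metrics-server has not yet served a scrape for those pods. A minimal sketch of that polling pattern is shown below. pollTopPods is a hypothetical helper that shells out to kubectl, not the test's actual retry code in addons_test.go.

package metricscheck

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// pollTopPods retries `kubectl top pods -n kube-system` until it succeeds or
// ctx expires, mirroring the retry loop visible in the log above.
func pollTopPods(ctx context.Context, kubeContext string) (string, error) {
	ticker := time.NewTicker(15 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			return string(out), nil // metrics-server is serving pod metrics
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("metrics never became available: %v: %s", err, out)
		case <-ticker.C: // wait and try again, as the test does
		}
	}
}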
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-990895 -n addons-990895
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 logs -n 25: (1.242647863s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-111401                                                                     | download-only-111401 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-928056                                                                     | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-106215                                                                     | download-only-106215 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| delete  | -p download-only-111401                                                                     | download-only-111401 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-679369 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | binary-mirror-679369                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37635                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-679369                                                                     | binary-mirror-679369 | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:15 UTC |
	| addons  | enable dashboard -p                                                                         | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC |                     |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-990895 --wait=true                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:15 UTC | 19 Jul 24 18:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | -p addons-990895                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | -p addons-990895                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | addons-990895                                                                               |                      |         |         |                     |                     |
	| ip      | addons-990895 ip                                                                            | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-990895 ssh cat                                                                       | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:17 UTC |
	|         | /opt/local-path-provisioner/pvc-31780053-8619-4414-948c-908236bd874e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:17 UTC | 19 Jul 24 18:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC | 19 Jul 24 18:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-990895 ssh curl -s                                                                   | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-990895 addons                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:18 UTC | 19 Jul 24 18:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990895 addons                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:19 UTC | 19 Jul 24 18:19 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-990895 ip                                                                            | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990895 addons disable                                                                | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:20 UTC | 19 Jul 24 18:20 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-990895 addons                                                                        | addons-990895        | jenkins | v1.33.1 | 19 Jul 24 18:23 UTC | 19 Jul 24 18:23 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:15:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:15:12.655117   23130 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:15:12.655349   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:15:12.655357   23130 out.go:304] Setting ErrFile to fd 2...
	I0719 18:15:12.655362   23130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:15:12.655529   23130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:15:12.656112   23130 out.go:298] Setting JSON to false
	I0719 18:15:12.656903   23130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3456,"bootTime":1721409457,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:15:12.656956   23130 start.go:139] virtualization: kvm guest
	I0719 18:15:12.659320   23130 out.go:177] * [addons-990895] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:15:12.660799   23130 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:15:12.660802   23130 notify.go:220] Checking for updates...
	I0719 18:15:12.662279   23130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:15:12.663778   23130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:15:12.664953   23130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:12.666090   23130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:15:12.667225   23130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:15:12.668541   23130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:15:12.699022   23130 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 18:15:12.700219   23130 start.go:297] selected driver: kvm2
	I0719 18:15:12.700237   23130 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:15:12.700248   23130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:15:12.700929   23130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:15:12.700989   23130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:15:12.715208   23130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:15:12.715252   23130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:15:12.715471   23130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:15:12.715500   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:12.715507   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:12.715517   23130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 18:15:12.715569   23130 start.go:340] cluster config:
	{Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:15:12.715660   23130 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:15:12.718125   23130 out.go:177] * Starting "addons-990895" primary control-plane node in "addons-990895" cluster
	I0719 18:15:12.719313   23130 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:15:12.719352   23130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:15:12.719359   23130 cache.go:56] Caching tarball of preloaded images
	I0719 18:15:12.719441   23130 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:15:12.719451   23130 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:15:12.719725   23130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json ...
	I0719 18:15:12.719743   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json: {Name:mkcd6448880e86d1c1f3f5ac7ee8b02df9aea5b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:12.719899   23130 start.go:360] acquireMachinesLock for addons-990895: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:15:12.719950   23130 start.go:364] duration metric: took 36µs to acquireMachinesLock for "addons-990895"
	I0719 18:15:12.719968   23130 start.go:93] Provisioning new machine with config: &{Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:15:12.720020   23130 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 18:15:12.722121   23130 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 18:15:12.722228   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:15:12.722260   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:15:12.736035   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0719 18:15:12.736509   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:15:12.737112   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:15:12.737135   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:15:12.737507   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:15:12.737676   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:12.737824   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:12.737972   23130 start.go:159] libmachine.API.Create for "addons-990895" (driver="kvm2")
	I0719 18:15:12.737997   23130 client.go:168] LocalClient.Create starting
	I0719 18:15:12.738029   23130 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:15:12.803063   23130 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:15:12.928516   23130 main.go:141] libmachine: Running pre-create checks...
	I0719 18:15:12.928536   23130 main.go:141] libmachine: (addons-990895) Calling .PreCreateCheck
	I0719 18:15:12.929040   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:12.929471   23130 main.go:141] libmachine: Creating machine...
	I0719 18:15:12.929484   23130 main.go:141] libmachine: (addons-990895) Calling .Create
	I0719 18:15:12.929595   23130 main.go:141] libmachine: (addons-990895) Creating KVM machine...
	I0719 18:15:12.930906   23130 main.go:141] libmachine: (addons-990895) DBG | found existing default KVM network
	I0719 18:15:12.931602   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:12.931477   23151 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0719 18:15:12.931629   23130 main.go:141] libmachine: (addons-990895) DBG | created network xml: 
	I0719 18:15:12.931643   23130 main.go:141] libmachine: (addons-990895) DBG | <network>
	I0719 18:15:12.931656   23130 main.go:141] libmachine: (addons-990895) DBG |   <name>mk-addons-990895</name>
	I0719 18:15:12.931666   23130 main.go:141] libmachine: (addons-990895) DBG |   <dns enable='no'/>
	I0719 18:15:12.931675   23130 main.go:141] libmachine: (addons-990895) DBG |   
	I0719 18:15:12.931682   23130 main.go:141] libmachine: (addons-990895) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 18:15:12.931700   23130 main.go:141] libmachine: (addons-990895) DBG |     <dhcp>
	I0719 18:15:12.931709   23130 main.go:141] libmachine: (addons-990895) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 18:15:12.931716   23130 main.go:141] libmachine: (addons-990895) DBG |     </dhcp>
	I0719 18:15:12.931721   23130 main.go:141] libmachine: (addons-990895) DBG |   </ip>
	I0719 18:15:12.931731   23130 main.go:141] libmachine: (addons-990895) DBG |   
	I0719 18:15:12.931741   23130 main.go:141] libmachine: (addons-990895) DBG | </network>
	I0719 18:15:12.931751   23130 main.go:141] libmachine: (addons-990895) DBG | 
	I0719 18:15:12.936991   23130 main.go:141] libmachine: (addons-990895) DBG | trying to create private KVM network mk-addons-990895 192.168.39.0/24...
	I0719 18:15:12.999137   23130 main.go:141] libmachine: (addons-990895) DBG | private KVM network mk-addons-990895 192.168.39.0/24 created
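The XML printed above is the private network definition the kvm2 driver hands to libvirt before it can log the network as created. A rough sketch of defining and activating such a network with the libvirt Go bindings follows; libvirt.org/go/libvirt is assumed, and createNetwork/networkXML are hypothetical names rather than the driver's actual code.

package kvmsketch

import (
	libvirt "libvirt.org/go/libvirt"
)

// createNetwork defines a private network (such as mk-addons-990895) from its
// XML, marks it autostart, and brings it up, roughly matching the
// "trying to create private KVM network ... created" lines above.
func createNetwork(uri, networkXML string) error {
	conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI
	if err != nil {
		return err
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		return err
	}
	defer net.Free()

	if err := net.SetAutostart(true); err != nil {
		return err
	}
	return net.Create() // activates the network, yielding the "created" log line
}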
	I0719 18:15:12.999170   23130 main.go:141] libmachine: (addons-990895) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 ...
	I0719 18:15:12.999197   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:12.999100   23151 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:12.999210   23130 main.go:141] libmachine: (addons-990895) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:15:12.999247   23130 main.go:141] libmachine: (addons-990895) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:15:13.253348   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.253206   23151 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa...
	I0719 18:15:13.542394   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.542268   23151 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/addons-990895.rawdisk...
	I0719 18:15:13.542436   23130 main.go:141] libmachine: (addons-990895) DBG | Writing magic tar header
	I0719 18:15:13.542445   23130 main.go:141] libmachine: (addons-990895) DBG | Writing SSH key tar header
	I0719 18:15:13.542460   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:13.542389   23151 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 ...
	I0719 18:15:13.542471   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895
	I0719 18:15:13.542537   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895 (perms=drwx------)
	I0719 18:15:13.542575   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:15:13.542586   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:15:13.542595   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:15:13.542601   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:15:13.542609   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:15:13.542615   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:15:13.542625   23130 main.go:141] libmachine: (addons-990895) DBG | Checking permissions on dir: /home
	I0719 18:15:13.542634   23130 main.go:141] libmachine: (addons-990895) DBG | Skipping /home - not owner
	I0719 18:15:13.542645   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:15:13.542661   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:15:13.542670   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:15:13.542689   23130 main.go:141] libmachine: (addons-990895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
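Before the domain is defined, the driver builds the VM's backing storage: an SSH key pair, a sparse .rawdisk file (the "Writing magic tar header" / "Writing SSH key tar header" lines show the key being packed into a small tar stream at the front of the file before it is grown to full size), and then directory permissions are tightened. The sketch below covers only the sparse-sizing part; createRawDisk is a hypothetical helper, not the driver's common.go code.

package kvmsketch

import "os"

// createRawDisk creates a sparse raw disk image of the requested size
// (20000 MB for addons-990895, per DiskSize in the config above).
func createRawDisk(path string, sizeMB int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	// Grow the file without writing data, so it stays sparse on the host.
	return f.Truncate(sizeMB * 1024 * 1024)
}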
	I0719 18:15:13.542700   23130 main.go:141] libmachine: (addons-990895) Creating domain...
	I0719 18:15:13.543651   23130 main.go:141] libmachine: (addons-990895) define libvirt domain using xml: 
	I0719 18:15:13.543669   23130 main.go:141] libmachine: (addons-990895) <domain type='kvm'>
	I0719 18:15:13.543679   23130 main.go:141] libmachine: (addons-990895)   <name>addons-990895</name>
	I0719 18:15:13.543687   23130 main.go:141] libmachine: (addons-990895)   <memory unit='MiB'>4000</memory>
	I0719 18:15:13.543696   23130 main.go:141] libmachine: (addons-990895)   <vcpu>2</vcpu>
	I0719 18:15:13.543702   23130 main.go:141] libmachine: (addons-990895)   <features>
	I0719 18:15:13.543709   23130 main.go:141] libmachine: (addons-990895)     <acpi/>
	I0719 18:15:13.543712   23130 main.go:141] libmachine: (addons-990895)     <apic/>
	I0719 18:15:13.543728   23130 main.go:141] libmachine: (addons-990895)     <pae/>
	I0719 18:15:13.543736   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.543741   23130 main.go:141] libmachine: (addons-990895)   </features>
	I0719 18:15:13.543746   23130 main.go:141] libmachine: (addons-990895)   <cpu mode='host-passthrough'>
	I0719 18:15:13.543751   23130 main.go:141] libmachine: (addons-990895)   
	I0719 18:15:13.543759   23130 main.go:141] libmachine: (addons-990895)   </cpu>
	I0719 18:15:13.543764   23130 main.go:141] libmachine: (addons-990895)   <os>
	I0719 18:15:13.543770   23130 main.go:141] libmachine: (addons-990895)     <type>hvm</type>
	I0719 18:15:13.543776   23130 main.go:141] libmachine: (addons-990895)     <boot dev='cdrom'/>
	I0719 18:15:13.543781   23130 main.go:141] libmachine: (addons-990895)     <boot dev='hd'/>
	I0719 18:15:13.543786   23130 main.go:141] libmachine: (addons-990895)     <bootmenu enable='no'/>
	I0719 18:15:13.543792   23130 main.go:141] libmachine: (addons-990895)   </os>
	I0719 18:15:13.543797   23130 main.go:141] libmachine: (addons-990895)   <devices>
	I0719 18:15:13.543808   23130 main.go:141] libmachine: (addons-990895)     <disk type='file' device='cdrom'>
	I0719 18:15:13.543818   23130 main.go:141] libmachine: (addons-990895)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/boot2docker.iso'/>
	I0719 18:15:13.543825   23130 main.go:141] libmachine: (addons-990895)       <target dev='hdc' bus='scsi'/>
	I0719 18:15:13.543842   23130 main.go:141] libmachine: (addons-990895)       <readonly/>
	I0719 18:15:13.543856   23130 main.go:141] libmachine: (addons-990895)     </disk>
	I0719 18:15:13.543880   23130 main.go:141] libmachine: (addons-990895)     <disk type='file' device='disk'>
	I0719 18:15:13.543913   23130 main.go:141] libmachine: (addons-990895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:15:13.543930   23130 main.go:141] libmachine: (addons-990895)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/addons-990895.rawdisk'/>
	I0719 18:15:13.543938   23130 main.go:141] libmachine: (addons-990895)       <target dev='hda' bus='virtio'/>
	I0719 18:15:13.543944   23130 main.go:141] libmachine: (addons-990895)     </disk>
	I0719 18:15:13.543951   23130 main.go:141] libmachine: (addons-990895)     <interface type='network'>
	I0719 18:15:13.543958   23130 main.go:141] libmachine: (addons-990895)       <source network='mk-addons-990895'/>
	I0719 18:15:13.543965   23130 main.go:141] libmachine: (addons-990895)       <model type='virtio'/>
	I0719 18:15:13.543990   23130 main.go:141] libmachine: (addons-990895)     </interface>
	I0719 18:15:13.544012   23130 main.go:141] libmachine: (addons-990895)     <interface type='network'>
	I0719 18:15:13.544026   23130 main.go:141] libmachine: (addons-990895)       <source network='default'/>
	I0719 18:15:13.544036   23130 main.go:141] libmachine: (addons-990895)       <model type='virtio'/>
	I0719 18:15:13.544050   23130 main.go:141] libmachine: (addons-990895)     </interface>
	I0719 18:15:13.544060   23130 main.go:141] libmachine: (addons-990895)     <serial type='pty'>
	I0719 18:15:13.544070   23130 main.go:141] libmachine: (addons-990895)       <target port='0'/>
	I0719 18:15:13.544085   23130 main.go:141] libmachine: (addons-990895)     </serial>
	I0719 18:15:13.544097   23130 main.go:141] libmachine: (addons-990895)     <console type='pty'>
	I0719 18:15:13.544107   23130 main.go:141] libmachine: (addons-990895)       <target type='serial' port='0'/>
	I0719 18:15:13.544117   23130 main.go:141] libmachine: (addons-990895)     </console>
	I0719 18:15:13.544127   23130 main.go:141] libmachine: (addons-990895)     <rng model='virtio'>
	I0719 18:15:13.544139   23130 main.go:141] libmachine: (addons-990895)       <backend model='random'>/dev/random</backend>
	I0719 18:15:13.544153   23130 main.go:141] libmachine: (addons-990895)     </rng>
	I0719 18:15:13.544164   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.544175   23130 main.go:141] libmachine: (addons-990895)     
	I0719 18:15:13.544186   23130 main.go:141] libmachine: (addons-990895)   </devices>
	I0719 18:15:13.544196   23130 main.go:141] libmachine: (addons-990895) </domain>
	I0719 18:15:13.544210   23130 main.go:141] libmachine: (addons-990895) 
	I0719 18:15:13.549953   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:00:3a:78 in network default
	I0719 18:15:13.550604   23130 main.go:141] libmachine: (addons-990895) Ensuring networks are active...
	I0719 18:15:13.550623   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:13.551190   23130 main.go:141] libmachine: (addons-990895) Ensuring network default is active
	I0719 18:15:13.551487   23130 main.go:141] libmachine: (addons-990895) Ensuring network mk-addons-990895 is active
	I0719 18:15:13.551978   23130 main.go:141] libmachine: (addons-990895) Getting domain xml...
	I0719 18:15:13.552632   23130 main.go:141] libmachine: (addons-990895) Creating domain...
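With the storage in place, the driver defines the domain from the XML above, ensures the default and mk-addons-990895 networks are active, and then starts it ("Creating domain..."). A rough equivalent using the same assumed libvirt Go bindings is sketched here; defineAndStartDomain and domainXML are hypothetical names.

package kvmsketch

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStartDomain registers the domain XML with libvirt and boots it,
// the step logged as "define libvirt domain using xml" / "Creating domain...".
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // starts the persistent domain, like `virsh start`
}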
	I0719 18:15:14.957544   23130 main.go:141] libmachine: (addons-990895) Waiting to get IP...
	I0719 18:15:14.958478   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:14.958923   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:14.958962   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:14.958915   23151 retry.go:31] will retry after 253.11897ms: waiting for machine to come up
	I0719 18:15:15.213417   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.213856   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.213883   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.213809   23151 retry.go:31] will retry after 301.657933ms: waiting for machine to come up
	I0719 18:15:15.516593   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.517045   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.517076   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.516993   23151 retry.go:31] will retry after 344.420609ms: waiting for machine to come up
	I0719 18:15:15.863556   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:15.863982   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:15.864006   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:15.863925   23151 retry.go:31] will retry after 515.563894ms: waiting for machine to come up
	I0719 18:15:16.380427   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:16.380964   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:16.380996   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:16.380891   23151 retry.go:31] will retry after 477.23814ms: waiting for machine to come up
	I0719 18:15:16.859648   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:16.860113   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:16.860140   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:16.860056   23151 retry.go:31] will retry after 720.689928ms: waiting for machine to come up
	I0719 18:15:17.581910   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:17.582390   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:17.582418   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:17.582340   23151 retry.go:31] will retry after 1.039280451s: waiting for machine to come up
	I0719 18:15:18.622660   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:18.623063   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:18.623103   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:18.623020   23151 retry.go:31] will retry after 1.06682594s: waiting for machine to come up
	I0719 18:15:19.691356   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:19.691787   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:19.691825   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:19.691750   23151 retry.go:31] will retry after 1.440824705s: waiting for machine to come up
	I0719 18:15:21.133643   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:21.134051   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:21.134075   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:21.134026   23151 retry.go:31] will retry after 2.215935281s: waiting for machine to come up
	I0719 18:15:23.351043   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:23.351612   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:23.351635   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:23.351552   23151 retry.go:31] will retry after 2.178016178s: waiting for machine to come up
	I0719 18:15:25.531912   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:25.532303   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:25.532355   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:25.532253   23151 retry.go:31] will retry after 3.627193721s: waiting for machine to come up
	I0719 18:15:29.160706   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:29.161136   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find current IP address of domain addons-990895 in network mk-addons-990895
	I0719 18:15:29.161173   23130 main.go:141] libmachine: (addons-990895) DBG | I0719 18:15:29.161093   23151 retry.go:31] will retry after 4.493345828s: waiting for machine to come up
	I0719 18:15:33.656508   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.656961   23130 main.go:141] libmachine: (addons-990895) Found IP for machine: 192.168.39.239
	I0719 18:15:33.656980   23130 main.go:141] libmachine: (addons-990895) Reserving static IP address...
	I0719 18:15:33.656991   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has current primary IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.657395   23130 main.go:141] libmachine: (addons-990895) DBG | unable to find host DHCP lease matching {name: "addons-990895", mac: "52:54:00:15:90:82", ip: "192.168.39.239"} in network mk-addons-990895
	I0719 18:15:33.726483   23130 main.go:141] libmachine: (addons-990895) Reserved static IP address: 192.168.39.239
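The "Waiting to get IP" loop above polls libvirt's DHCP leases for the domain's MAC address (52:54:00:15:90:82) with an increasing backoff until a lease like the one printed at 18:15:33 appears. One such lookup is sketched below, again assuming the libvirt Go bindings; lookupIPByMAC is a hypothetical helper.

package kvmsketch

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// lookupIPByMAC scans the DHCP leases of a libvirt network (for example
// mk-addons-990895) for the given MAC and returns the leased IP, mirroring
// the "found host DHCP lease matching {... mac: ..., ip: ...}" line above.
func lookupIPByMAC(conn *libvirt.Connect, networkName, mac string) (string, error) {
	net, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer net.Free()

	leases, err := net.GetDHCPLeases()
	if err != nil {
		return "", err
	}
	for _, lease := range leases {
		if lease.Mac == mac {
			return lease.IPaddr, nil // e.g. 192.168.39.239 for addons-990895
		}
	}
	return "", fmt.Errorf("no DHCP lease yet for %s in %s", mac, networkName)
}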
	I0719 18:15:33.726516   23130 main.go:141] libmachine: (addons-990895) Waiting for SSH to be available...
	I0719 18:15:33.726532   23130 main.go:141] libmachine: (addons-990895) DBG | Getting to WaitForSSH function...
	I0719 18:15:33.729220   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.729693   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.729726   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.729911   23130 main.go:141] libmachine: (addons-990895) DBG | Using SSH client type: external
	I0719 18:15:33.729935   23130 main.go:141] libmachine: (addons-990895) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa (-rw-------)
	I0719 18:15:33.729961   23130 main.go:141] libmachine: (addons-990895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:15:33.729972   23130 main.go:141] libmachine: (addons-990895) DBG | About to run SSH command:
	I0719 18:15:33.729980   23130 main.go:141] libmachine: (addons-990895) DBG | exit 0
	I0719 18:15:33.867921   23130 main.go:141] libmachine: (addons-990895) DBG | SSH cmd err, output: <nil>: 
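WaitForSSH above shells out to the system ssh binary and treats a clean "exit 0" as proof that sshd is answering and key authentication works. A rough equivalent using os/exec is sketched below; the option list is abridged from the logged command and probeSSH is an invented name.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH returns nil once "exit 0" succeeds over an external ssh client,
    // i.e. the guest is reachable and key auth is accepted.
    func probeSSH(ip, keyPath string) error {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "PasswordAuthentication=no",
            "-i", keyPath, "-p", "22",
            "docker@" + ip, "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        if err := probeSSH("192.168.39.239", "/path/to/id_rsa"); err != nil {
            fmt.Println(err)
        }
    }
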
	I0719 18:15:33.868216   23130 main.go:141] libmachine: (addons-990895) KVM machine creation complete!
	I0719 18:15:33.868448   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:33.868986   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:33.869162   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:33.869340   23130 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:15:33.869354   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:15:33.870449   23130 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:15:33.870462   23130 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:15:33.870468   23130 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:15:33.870474   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:33.872587   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.872980   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.873006   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.873136   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:33.873271   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.873415   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.873519   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:33.873661   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:33.873839   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:33.873849   23130 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:15:33.971162   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:15:33.971183   23130 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:15:33.971193   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:33.974045   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.974403   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:33.974426   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:33.974587   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:33.974757   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.974900   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:33.975109   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:33.975307   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:33.975563   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:33.975577   23130 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:15:34.072309   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:15:34.072407   23130 main.go:141] libmachine: found compatible host: buildroot
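Provisioner detection boils down to reading /etc/os-release on the guest and matching the ID field ("buildroot" here). A simplified sketch of that parse, assuming the file layout shown in the output above; field handling is reduced compared with what libmachine actually does.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // detectProvisioner returns the ID field of an os-release file, which the
    // log above compares against "buildroot" to pick a provisioner.
    func detectProvisioner(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            line := s.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", fmt.Errorf("no ID field in %s", path)
    }

    func main() {
        id, err := detectProvisioner("/etc/os-release")
        fmt.Println(id, err)
    }
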
	I0719 18:15:34.072421   23130 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:15:34.072433   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.072797   23130 buildroot.go:166] provisioning hostname "addons-990895"
	I0719 18:15:34.072821   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.073037   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.075464   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.075810   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.075831   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.076035   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.076205   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.076365   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.076518   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.076691   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.076863   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.076875   23130 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-990895 && echo "addons-990895" | sudo tee /etc/hostname
	I0719 18:15:34.189485   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-990895
	
	I0719 18:15:34.189526   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.192401   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.192736   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.192778   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.192899   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.193105   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.193286   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.193428   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.193590   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.193754   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.193768   23130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-990895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-990895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-990895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:15:34.299726   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:15:34.299762   23130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:15:34.299795   23130 buildroot.go:174] setting up certificates
	I0719 18:15:34.299808   23130 provision.go:84] configureAuth start
	I0719 18:15:34.299827   23130 main.go:141] libmachine: (addons-990895) Calling .GetMachineName
	I0719 18:15:34.300161   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:34.302973   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.303339   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.303379   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.303557   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.305877   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.306295   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.306318   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.306506   23130 provision.go:143] copyHostCerts
	I0719 18:15:34.306570   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:15:34.306692   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:15:34.306771   23130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:15:34.306827   23130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.addons-990895 san=[127.0.0.1 192.168.39.239 addons-990895 localhost minikube]
	I0719 18:15:34.568397   23130 provision.go:177] copyRemoteCerts
	I0719 18:15:34.568459   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:15:34.568491   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.571014   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.571279   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.571308   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.571457   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.571653   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.571793   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.571931   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:34.649873   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:15:34.672039   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:15:34.694665   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:15:34.716480   23130 provision.go:87] duration metric: took 416.656805ms to configureAuth
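configureAuth generates a server certificate whose SANs cover the hostnames and IPs in the "generating server cert" line (127.0.0.1, 192.168.39.239, addons-990895, localhost, minikube). The sketch below self-signs for brevity, whereas minikube signs with its CA; the names and the 26280h lifetime are taken from the log and cluster config, everything else is illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // newServerCert creates a self-signed certificate carrying the SANs shown
    // in the log. The real code path is CA-signed; this is a simplified sketch.
    func newServerCert(cn string, dnsNames []string, ips []net.IP) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: cn},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return err
        }
        return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

    func main() {
        _ = newServerCert("jenkins.addons-990895",
            []string{"addons-990895", "localhost", "minikube"},
            []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.239")})
    }
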
	I0719 18:15:34.716502   23130 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:15:34.716679   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:15:34.716762   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.719980   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.720467   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.720493   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.720767   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.720997   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.721163   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.721324   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.721521   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:34.721724   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:34.721741   23130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:15:34.964357   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
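The step above drops an environment file carrying the --insecure-registry flag and restarts CRI-O so the service picks it up. A small illustrative helper; paths and values are copied from the log, the function name is mine, and running it for real requires root.

    package main

    import (
        "os"
        "os/exec"
    )

    // writeCrioMinikubeOpts mirrors the logged step: write the env file, then
    // restart CRI-O to apply the new options.
    func writeCrioMinikubeOpts(serviceCIDR string) error {
        env := "CRIO_MINIKUBE_OPTIONS='--insecure-registry " + serviceCIDR + " '\n"
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(env), 0644); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", "crio").Run()
    }

    func main() {
        _ = writeCrioMinikubeOpts("10.96.0.0/12")
    }
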
	I0719 18:15:34.964391   23130 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:15:34.964402   23130 main.go:141] libmachine: (addons-990895) Calling .GetURL
	I0719 18:15:34.965835   23130 main.go:141] libmachine: (addons-990895) DBG | Using libvirt version 6000000
	I0719 18:15:34.968099   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.968571   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.968605   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.968738   23130 main.go:141] libmachine: Docker is up and running!
	I0719 18:15:34.968752   23130 main.go:141] libmachine: Reticulating splines...
	I0719 18:15:34.968759   23130 client.go:171] duration metric: took 22.230753522s to LocalClient.Create
	I0719 18:15:34.968785   23130 start.go:167] duration metric: took 22.230813085s to libmachine.API.Create "addons-990895"
	I0719 18:15:34.968797   23130 start.go:293] postStartSetup for "addons-990895" (driver="kvm2")
	I0719 18:15:34.968811   23130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:15:34.968835   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:34.969073   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:15:34.969096   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:34.971357   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.971649   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:34.971672   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:34.971821   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:34.972051   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:34.972194   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:34.972327   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.050527   23130 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:15:35.054591   23130 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:15:35.054621   23130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:15:35.054697   23130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:15:35.054721   23130 start.go:296] duration metric: took 85.917302ms for postStartSetup
	I0719 18:15:35.054751   23130 main.go:141] libmachine: (addons-990895) Calling .GetConfigRaw
	I0719 18:15:35.055300   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:35.057772   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.058064   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.058092   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.058350   23130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/config.json ...
	I0719 18:15:35.058551   23130 start.go:128] duration metric: took 22.338520539s to createHost
	I0719 18:15:35.058577   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.060842   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.061161   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.061182   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.061332   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.061494   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.061628   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.061774   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.061924   23130 main.go:141] libmachine: Using SSH client type: native
	I0719 18:15:35.062118   23130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0719 18:15:35.062132   23130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 18:15:35.160508   23130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721412935.136749392
	
	I0719 18:15:35.160531   23130 fix.go:216] guest clock: 1721412935.136749392
	I0719 18:15:35.160539   23130 fix.go:229] Guest: 2024-07-19 18:15:35.136749392 +0000 UTC Remote: 2024-07-19 18:15:35.058563864 +0000 UTC m=+22.435503273 (delta=78.185528ms)
	I0719 18:15:35.160576   23130 fix.go:200] guest clock delta is within tolerance: 78.185528ms
	I0719 18:15:35.160595   23130 start.go:83] releasing machines lock for "addons-990895", held for 22.440634848s
	I0719 18:15:35.160622   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.160868   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:35.163552   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.163905   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.163932   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.164061   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164558   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164750   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:15:35.164890   23130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:15:35.164941   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.164971   23130 ssh_runner.go:195] Run: cat /version.json
	I0719 18:15:35.164993   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:15:35.167616   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.167908   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.167945   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168006   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168083   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.168241   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.168391   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.168428   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:35.168451   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:35.168550   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.168882   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:15:35.169036   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:15:35.169196   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:15:35.169360   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:15:35.274442   23130 ssh_runner.go:195] Run: systemctl --version
	I0719 18:15:35.279962   23130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:15:35.436149   23130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:15:35.443206   23130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:15:35.443271   23130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:15:35.459973   23130 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:15:35.460002   23130 start.go:495] detecting cgroup driver to use...
	I0719 18:15:35.460066   23130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:15:35.474898   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:15:35.487478   23130 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:15:35.487535   23130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:15:35.500053   23130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:15:35.512231   23130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:15:35.624844   23130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:15:35.762796   23130 docker.go:233] disabling docker service ...
	I0719 18:15:35.762871   23130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:15:35.776361   23130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:15:35.788829   23130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:15:35.913323   23130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:15:36.028092   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
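Both the cri-docker and docker sections rely on systemctl exit codes rather than output: "is-active --quiet" succeeds only if the unit is running, which is why the log shows no stdout for these probes. A one-line check in Go, loosely mirroring the command logged above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // dockerActive reports whether the docker unit is still running, using the
    // exit status of "systemctl is-active --quiet" and no output parsing.
    func dockerActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "docker").Run() == nil
    }

    func main() {
        fmt.Println("docker still active:", dockerActive())
    }
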
	I0719 18:15:36.041197   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:15:36.058208   23130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:15:36.058264   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.067481   23130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:15:36.067546   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.077090   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.086563   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.096146   23130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:15:36.105794   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.115580   23130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:15:36.130970   23130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
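Each sed invocation above rewrites a single "key = value" line in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A native sketch of that edit with regexp; minikube itself shells out to sed over SSH, so this is illustrative only.

    package main

    import (
        "fmt"
        "regexp"
    )

    // setConfValue replaces any existing "key = ..." line with the desired
    // quoted value, the same effect as the sed commands in the log.
    func setConfValue(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        conf := []byte("[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n")
        conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(string(conf))
    }
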
	I0719 18:15:36.140464   23130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:15:36.148842   23130 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:15:36.148889   23130 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:15:36.160381   23130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:15:36.168764   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:15:36.272945   23130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 18:15:36.399624   23130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:15:36.399716   23130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:15:36.404043   23130 start.go:563] Will wait 60s for crictl version
	I0719 18:15:36.404111   23130 ssh_runner.go:195] Run: which crictl
	I0719 18:15:36.407401   23130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:15:36.443762   23130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:15:36.443907   23130 ssh_runner.go:195] Run: crio --version
	I0719 18:15:36.470011   23130 ssh_runner.go:195] Run: crio --version
	I0719 18:15:36.497687   23130 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:15:36.499071   23130 main.go:141] libmachine: (addons-990895) Calling .GetIP
	I0719 18:15:36.501548   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:36.501893   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:15:36.501923   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:15:36.502132   23130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:15:36.506274   23130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:15:36.517497   23130 kubeadm.go:883] updating cluster {Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:15:36.517610   23130 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:15:36.517652   23130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:15:36.546231   23130 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 18:15:36.546303   23130 ssh_runner.go:195] Run: which lz4
	I0719 18:15:36.549860   23130 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 18:15:36.554411   23130 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 18:15:36.554434   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 18:15:37.689583   23130 crio.go:462] duration metric: took 1.139751218s to copy over tarball
	I0719 18:15:37.689658   23130 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 18:15:39.891318   23130 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.201628814s)
	I0719 18:15:39.891355   23130 crio.go:469] duration metric: took 2.201746484s to extract the tarball
	I0719 18:15:39.891366   23130 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 18:15:39.942005   23130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:15:39.984400   23130 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:15:39.984424   23130 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:15:39.984434   23130 kubeadm.go:934] updating node { 192.168.39.239 8443 v1.30.3 crio true true} ...
	I0719 18:15:39.984570   23130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:15:39.984657   23130 ssh_runner.go:195] Run: crio config
	I0719 18:15:40.031283   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:40.031303   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:40.031315   23130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:15:40.031339   23130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-990895 NodeName:addons-990895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:15:40.031475   23130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-990895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 18:15:40.031535   23130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:15:40.040582   23130 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:15:40.040653   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 18:15:40.049121   23130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 18:15:40.064016   23130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:15:40.078515   23130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 18:15:40.092955   23130 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0719 18:15:40.096256   23130 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
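The /etc/hosts updates for host.minikube.internal and control-plane.minikube.internal both use the same idempotent trick: filter out any existing line for the name, append the fresh "IP<TAB>name" mapping, and copy the result back. The in-memory equivalent, sketched in Go (writing the file back is omitted):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any existing line ending in "<TAB>name" and
    // appends the desired mapping, mirroring the grep -v / echo pipeline above.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.39.239", "control-plane.minikube.internal"))
    }
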
	I0719 18:15:40.107024   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:15:40.220066   23130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:15:40.235629   23130 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895 for IP: 192.168.39.239
	I0719 18:15:40.235658   23130 certs.go:194] generating shared ca certs ...
	I0719 18:15:40.235677   23130 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.235858   23130 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:15:40.336611   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt ...
	I0719 18:15:40.336639   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt: {Name:mk27146b57328cab1c604a664f4363e78005c7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.336804   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key ...
	I0719 18:15:40.336815   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key: {Name:mk6755898b6f2bbefdc4f70789d7f9202127771b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.336915   23130 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:15:40.610789   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt ...
	I0719 18:15:40.610815   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt: {Name:mk6e8498ab120800dd3d9c1317094f77a9ea2c15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.610971   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key ...
	I0719 18:15:40.610982   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key: {Name:mk011c1c36825827521b2335be839910ab841162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.611045   23130 certs.go:256] generating profile certs ...
	I0719 18:15:40.611094   23130 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key
	I0719 18:15:40.611107   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt with IP's: []
	I0719 18:15:40.988492   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt ...
	I0719 18:15:40.988523   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: {Name:mk0b9ad5430de0ecaa4bc22d0cba626c3711c0be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.988679   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key ...
	I0719 18:15:40.988690   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.key: {Name:mk2d8c7920bcdee37d6f657890a1810a1bb0536e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:40.988757   23130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02
	I0719 18:15:40.988774   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0719 18:15:41.330135   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 ...
	I0719 18:15:41.330164   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02: {Name:mkab0302e3d233887146ad77168048a8f6f68639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.330324   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02 ...
	I0719 18:15:41.330337   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02: {Name:mk9119c7f1277d15d206e380b7556c1b61dc9bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.330406   23130 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt.d17ebd02 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt
	I0719 18:15:41.330474   23130 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key.d17ebd02 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key
	I0719 18:15:41.330518   23130 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key
	I0719 18:15:41.330534   23130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt with IP's: []
	I0719 18:15:41.506283   23130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt ...
	I0719 18:15:41.506310   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt: {Name:mk4e7cacfea97232ffdbf1aa0b96fb892e66dae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.506460   23130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key ...
	I0719 18:15:41.506471   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key: {Name:mkd03c28824febcacc9781048345473e8b2394e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:15:41.506625   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:15:41.506656   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:15:41.506675   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:15:41.506696   23130 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:15:41.507246   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:15:41.531408   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:15:41.554667   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:15:41.576703   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:15:41.598385   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 18:15:41.620492   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:15:41.642746   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:15:41.664215   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 18:15:41.686011   23130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:15:41.707904   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:15:41.723357   23130 ssh_runner.go:195] Run: openssl version
	I0719 18:15:41.728879   23130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:15:41.738798   23130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.742819   23130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.742881   23130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:15:41.748367   23130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
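The two openssl commands above compute the CA certificate's subject hash and link it as <hash>.0 (b5213941.0 here) so that system TLS code can find the CA in /etc/ssl/certs. A sketch that reuses the openssl binary for the hash; certHashLink is an invented name.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // certHashLink returns the "<subject-hash>.0" symlink name that would be
    // created for the given PEM certificate, using openssl for the hash.
    func certHashLink(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        link, err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(link, err)
    }
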
	I0719 18:15:41.758508   23130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:15:41.762042   23130 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:15:41.762096   23130 kubeadm.go:392] StartCluster: {Name:addons-990895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-990895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:15:41.762195   23130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:15:41.762247   23130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:15:41.798255   23130 cri.go:89] found id: ""
	I0719 18:15:41.798329   23130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 18:15:41.809999   23130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 18:15:41.828880   23130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 18:15:41.844187   23130 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 18:15:41.844204   23130 kubeadm.go:157] found existing configuration files:
	
	I0719 18:15:41.844253   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 18:15:41.854677   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 18:15:41.854722   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 18:15:41.867637   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 18:15:41.875932   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 18:15:41.875989   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 18:15:41.884739   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 18:15:41.893097   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 18:15:41.893157   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 18:15:41.901726   23130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 18:15:41.909866   23130 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 18:15:41.909921   23130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 18:15:41.918234   23130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 18:15:42.098735   23130 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 18:15:50.932390   23130 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 18:15:50.932458   23130 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 18:15:50.932573   23130 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 18:15:50.932706   23130 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 18:15:50.932876   23130 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 18:15:50.932967   23130 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 18:15:50.934209   23130 out.go:204]   - Generating certificates and keys ...
	I0719 18:15:50.934281   23130 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 18:15:50.934345   23130 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 18:15:50.934445   23130 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 18:15:50.934508   23130 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 18:15:50.934564   23130 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 18:15:50.934609   23130 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 18:15:50.934677   23130 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 18:15:50.934835   23130 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-990895 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0719 18:15:50.934910   23130 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 18:15:50.935067   23130 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-990895 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0719 18:15:50.935152   23130 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 18:15:50.935245   23130 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 18:15:50.935317   23130 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 18:15:50.935397   23130 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 18:15:50.935472   23130 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 18:15:50.935545   23130 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 18:15:50.935620   23130 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 18:15:50.935711   23130 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 18:15:50.935787   23130 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 18:15:50.935919   23130 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 18:15:50.936041   23130 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 18:15:50.937240   23130 out.go:204]   - Booting up control plane ...
	I0719 18:15:50.937317   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 18:15:50.937392   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 18:15:50.937447   23130 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 18:15:50.937572   23130 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 18:15:50.937685   23130 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 18:15:50.937743   23130 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 18:15:50.937889   23130 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 18:15:50.937977   23130 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 18:15:50.938049   23130 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.887458ms
	I0719 18:15:50.938126   23130 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 18:15:50.938181   23130 kubeadm.go:310] [api-check] The API server is healthy after 4.506542904s
	I0719 18:15:50.938282   23130 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 18:15:50.938422   23130 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 18:15:50.938507   23130 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 18:15:50.938690   23130 kubeadm.go:310] [mark-control-plane] Marking the node addons-990895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 18:15:50.938772   23130 kubeadm.go:310] [bootstrap-token] Using token: s59p6j.odilrpyh2nf3e6fm
	I0719 18:15:50.939990   23130 out.go:204]   - Configuring RBAC rules ...
	I0719 18:15:50.940102   23130 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 18:15:50.940180   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 18:15:50.940298   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 18:15:50.940403   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 18:15:50.940496   23130 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 18:15:50.940571   23130 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 18:15:50.940691   23130 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 18:15:50.940733   23130 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 18:15:50.940772   23130 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 18:15:50.940778   23130 kubeadm.go:310] 
	I0719 18:15:50.940832   23130 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 18:15:50.940850   23130 kubeadm.go:310] 
	I0719 18:15:50.940917   23130 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 18:15:50.940924   23130 kubeadm.go:310] 
	I0719 18:15:50.940973   23130 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 18:15:50.941055   23130 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 18:15:50.941127   23130 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 18:15:50.941138   23130 kubeadm.go:310] 
	I0719 18:15:50.941203   23130 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 18:15:50.941217   23130 kubeadm.go:310] 
	I0719 18:15:50.941270   23130 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 18:15:50.941277   23130 kubeadm.go:310] 
	I0719 18:15:50.941320   23130 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 18:15:50.941387   23130 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 18:15:50.941449   23130 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 18:15:50.941455   23130 kubeadm.go:310] 
	I0719 18:15:50.941533   23130 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 18:15:50.941598   23130 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 18:15:50.941604   23130 kubeadm.go:310] 
	I0719 18:15:50.941701   23130 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s59p6j.odilrpyh2nf3e6fm \
	I0719 18:15:50.941795   23130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 18:15:50.941814   23130 kubeadm.go:310] 	--control-plane 
	I0719 18:15:50.941820   23130 kubeadm.go:310] 
	I0719 18:15:50.941900   23130 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 18:15:50.941909   23130 kubeadm.go:310] 
	I0719 18:15:50.941984   23130 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s59p6j.odilrpyh2nf3e6fm \
	I0719 18:15:50.942078   23130 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
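
The --discovery-token-ca-cert-hash embedded in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. The Go sketch below recomputes it for verification; the ca.crt path is an assumption based on the certificateDir shown earlier in the log.

// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the cluster CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
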
	I0719 18:15:50.942086   23130 cni.go:84] Creating CNI manager for ""
	I0719 18:15:50.942093   23130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:15:50.943397   23130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 18:15:50.944436   23130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 18:15:50.954378   23130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
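
The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below writes an illustrative bridge + portmap conflist of the same general shape; the field values are assumptions for the example, not minikube's actual file.

// Sketch: write a minimal bridge CNI conflist. Values are illustrative only.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Write the conflist where the container runtime's CNI plugin looks for it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
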
	I0719 18:15:50.971087   23130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 18:15:50.971197   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:50.971208   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-990895 minikube.k8s.io/updated_at=2024_07_19T18_15_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=addons-990895 minikube.k8s.io/primary=true
	I0719 18:15:50.997148   23130 ops.go:34] apiserver oom_adj: -16
	I0719 18:15:51.103714   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:51.604052   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:52.104723   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:52.604363   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:53.104422   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:53.603765   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:54.104309   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:54.604472   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:55.103979   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:55.604134   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:56.103798   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:56.604410   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:57.104186   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:57.604716   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:58.104023   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:58.603897   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:59.103828   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:15:59.604741   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:00.103930   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:00.603712   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:01.104591   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:01.604130   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:02.104106   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:02.604527   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:03.103861   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:03.604505   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:04.103829   23130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:16:04.213080   23130 kubeadm.go:1113] duration metric: took 13.241943482s to wait for elevateKubeSystemPrivileges
	I0719 18:16:04.213118   23130 kubeadm.go:394] duration metric: took 22.451027119s to StartCluster
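
The run of repeated "kubectl get sa default" calls above is a fixed-interval poll that returns once the default service account exists (the ~13s elevateKubeSystemPrivileges wait). Below is a sketch of that poll-until-success pattern using the command and roughly 500ms cadence seen in the log; it is illustrative, not minikube's ssh_runner code.

// Sketch: retry a command every 500ms until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // overall timeout is an assumption
	for {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("timed out waiting for default service account: %v", err))
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
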
	I0719 18:16:04.213141   23130 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:16:04.213272   23130 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:16:04.213859   23130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:16:04.214080   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 18:16:04.214113   23130 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:16:04.214196   23130 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 18:16:04.214299   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:16:04.214314   23130 addons.go:69] Setting registry=true in profile "addons-990895"
	I0719 18:16:04.214298   23130 addons.go:69] Setting yakd=true in profile "addons-990895"
	I0719 18:16:04.214326   23130 addons.go:69] Setting default-storageclass=true in profile "addons-990895"
	I0719 18:16:04.214357   23130 addons.go:234] Setting addon registry=true in "addons-990895"
	I0719 18:16:04.214363   23130 addons.go:234] Setting addon yakd=true in "addons-990895"
	I0719 18:16:04.214352   23130 addons.go:69] Setting cloud-spanner=true in profile "addons-990895"
	I0719 18:16:04.214370   23130 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-990895"
	I0719 18:16:04.214306   23130 addons.go:69] Setting gcp-auth=true in profile "addons-990895"
	I0719 18:16:04.214395   23130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-990895"
	I0719 18:16:04.214405   23130 mustload.go:65] Loading cluster: addons-990895
	I0719 18:16:04.214407   23130 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-990895"
	I0719 18:16:04.214408   23130 addons.go:234] Setting addon cloud-spanner=true in "addons-990895"
	I0719 18:16:04.214426   23130 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-990895"
	I0719 18:16:04.214449   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214455   23130 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-990895"
	I0719 18:16:04.214462   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214548   23130 config.go:182] Loaded profile config "addons-990895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:16:04.214813   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214832   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214830   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214863   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214883   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214884   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.214917   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214925   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214973   23130 addons.go:69] Setting helm-tiller=true in profile "addons-990895"
	I0719 18:16:04.214974   23130 addons.go:69] Setting volumesnapshots=true in profile "addons-990895"
	I0719 18:16:04.215003   23130 addons.go:234] Setting addon helm-tiller=true in "addons-990895"
	I0719 18:16:04.215037   23130 addons.go:69] Setting ingress-dns=true in profile "addons-990895"
	I0719 18:16:04.215068   23130 addons.go:69] Setting metrics-server=true in profile "addons-990895"
	I0719 18:16:04.215042   23130 addons.go:69] Setting inspektor-gadget=true in profile "addons-990895"
	I0719 18:16:04.215110   23130 addons.go:234] Setting addon metrics-server=true in "addons-990895"
	I0719 18:16:04.215126   23130 addons.go:234] Setting addon inspektor-gadget=true in "addons-990895"
	I0719 18:16:04.215140   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214883   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.215191   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.215005   23130 addons.go:234] Setting addon volumesnapshots=true in "addons-990895"
	I0719 18:16:04.215493   23130 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-990895"
	I0719 18:16:04.215702   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.215749   23130 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-990895"
	I0719 18:16:04.215791   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214389   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.214398   23130 addons.go:69] Setting storage-provisioner=true in profile "addons-990895"
	I0719 18:16:04.216057   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216078   23130 addons.go:234] Setting addon storage-provisioner=true in "addons-990895"
	I0719 18:16:04.216082   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.216111   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.215702   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.216210   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216227   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.216245   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.216258   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.215024   23130 addons.go:69] Setting ingress=true in profile "addons-990895"
	I0719 18:16:04.215058   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.227003   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.227027   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.231222   23130 out.go:177] * Verifying Kubernetes components...
	I0719 18:16:04.231417   23130 addons.go:234] Setting addon ingress=true in "addons-990895"
	I0719 18:16:04.215131   23130 addons.go:234] Setting addon ingress-dns=true in "addons-990895"
	I0719 18:16:04.231484   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.231534   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.231694   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.231731   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.232020   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.232046   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.232070   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.232091   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.214389   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.233595   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.233643   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.234643   23130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:16:04.215482   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.234766   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.240119   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I0719 18:16:04.240153   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0719 18:16:04.240195   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0719 18:16:04.240120   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0719 18:16:04.215016   23130 addons.go:69] Setting volcano=true in profile "addons-990895"
	I0719 18:16:04.240435   23130 addons.go:234] Setting addon volcano=true in "addons-990895"
	I0719 18:16:04.240508   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.240665   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.240714   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.241061   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.241103   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.241880   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.241901   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.241951   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.242038   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.242148   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242158   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242578   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242596   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242731   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.242829   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.242849   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.242923   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.242974   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243115   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243170   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.243218   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.243381   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.245183   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.245210   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.246016   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.246478   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.246523   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.254589   23130 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-990895"
	I0719 18:16:04.254652   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.255103   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.255148   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.261302   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.262759   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0719 18:16:04.262912   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I0719 18:16:04.264433   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.266387   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0719 18:16:04.280774   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0719 18:16:04.281368   23130 addons.go:234] Setting addon default-storageclass=true in "addons-990895"
	I0719 18:16:04.281404   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:04.281484   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.281817   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.281842   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.282245   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.282263   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.282319   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.282318   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.282462   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.282520   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0719 18:16:04.282624   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0719 18:16:04.282766   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.282907   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.282918   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.283230   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.283303   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.283319   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.283542   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.283881   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.283903   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.284034   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.284043   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.284160   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.284198   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.284203   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.284408   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.284869   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.284906   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.288607   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0719 18:16:04.288633   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.288683   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.288699   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.289057   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.289123   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.289201   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.289252   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.289375   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.289487   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.289500   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.289585   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.289605   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.289817   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0719 18:16:04.290212   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.290426   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.290920   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.290949   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.291179   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.291191   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.291302   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.291504   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.291682   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.291722   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I0719 18:16:04.291873   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.292240   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.292310   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.292350   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.293226   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.293276   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.293866   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.293931   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37623
	I0719 18:16:04.294401   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.294429   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.294661   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.295337   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.295931   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.295977   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.296328   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.296712   23130 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 18:16:04.297029   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.297085   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.298194   23130 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 18:16:04.298210   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 18:16:04.298227   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.301558   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.301976   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.302009   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.302215   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.302352   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.302445   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.302533   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.310105   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
	I0719 18:16:04.310700   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0719 18:16:04.310871   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.311376   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.311398   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.311717   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.311900   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.312123   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.312702   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.312756   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.313110   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.313481   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0719 18:16:04.313658   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.313815   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.313881   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.313962   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.314468   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.314490   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.314897   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.315289   23130 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 18:16:04.315475   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.315494   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.316787   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 18:16:04.316811   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 18:16:04.316826   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.318246   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I0719 18:16:04.318779   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.319239   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.319256   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.319595   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.319777   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.320271   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.320976   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.321370   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.321725   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.321799   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.322056   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.322199   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.322337   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.323585   23130 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 18:16:04.324696   23130 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 18:16:04.325189   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0719 18:16:04.325700   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.325900   23130 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 18:16:04.325915   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 18:16:04.325931   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.326421   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.326439   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.327175   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.327884   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.329679   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.329743   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.329769   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.329789   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.330155   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.330328   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.330474   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.334231   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0719 18:16:04.334692   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.335213   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.335235   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.335571   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.335732   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.337179   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.338583   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0719 18:16:04.338916   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.339039   23130 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 18:16:04.339402   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.339419   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.339728   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.339929   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.340917   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 18:16:04.340937   23130 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 18:16:04.340956   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.341730   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46009
	I0719 18:16:04.342037   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.342513   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.342529   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.342599   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0719 18:16:04.343096   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.343524   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.343538   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.343860   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.344214   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.344233   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.344700   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0719 18:16:04.344958   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.345013   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0719 18:16:04.345103   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.345540   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.345556   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.345564   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.345567   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.345993   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.346161   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.346320   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.346551   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.346576   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.346632   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.346968   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.347192   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.347045   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.347445   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.347481   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.347818   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.348016   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.348226   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.348337   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 18:16:04.348406   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.348874   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.349679   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.349870   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:04.349890   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:04.350173   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:04.350184   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:04.350194   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:04.350201   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:04.350470   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:04.350515   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:04.350525   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:04.350533   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	W0719 18:16:04.350602   23130 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 18:16:04.350819   23130 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 18:16:04.350819   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 18:16:04.351599   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0719 18:16:04.352945   23130 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 18:16:04.352961   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 18:16:04.352978   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.353782   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I0719 18:16:04.354320   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 18:16:04.354855   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.355415   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.355792   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.355809   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.356003   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.356023   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.356330   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.356377   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.356514   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.356611   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 18:16:04.356731   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.356793   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.357309   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.357227   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:04.357348   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:04.357251   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.357720   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.357843   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.357959   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.358210   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39687
	I0719 18:16:04.358547   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.358673   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 18:16:04.358986   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.358996   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.359280   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.359419   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.361107   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 18:16:04.361261   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.362071   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.362135   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 18:16:04.362617   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.363071   23130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 18:16:04.363092   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.363484   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.363750   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 18:16:04.363776   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.363811   23130 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 18:16:04.364039   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.364956   23130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:16:04.364972   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 18:16:04.364989   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.365087   23130 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 18:16:04.365098   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 18:16:04.365111   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.365859   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 18:16:04.366689   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0719 18:16:04.366909   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 18:16:04.366924   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 18:16:04.366940   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.367976   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.368670   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.368686   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.369105   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.369307   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.369487   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.370061   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0719 18:16:04.370427   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.370740   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.371096   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.371111   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.371275   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.371303   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.371464   23130 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 18:16:04.371494   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.371548   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.371885   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.371921   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.372090   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.372145   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.372468   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.372767   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 18:16:04.372780   23130 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 18:16:04.372804   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.373208   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0719 18:16:04.373598   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.373707   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.373896   23130 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 18:16:04.373907   23130 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 18:16:04.373921   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.374011   23130 out.go:177]   - Using image docker.io/busybox:stable
	I0719 18:16:04.374913   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.374930   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.375775   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.376147   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.376212   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.376420   23130 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 18:16:04.376833   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.376852   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.376988   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.377247   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0719 18:16:04.377372   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.377423   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.377483   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.377614   23130 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 18:16:04.377632   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 18:16:04.377648   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.377664   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.377851   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.378127   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378155   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378169   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378202   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.378674   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.378697   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.378743   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378760   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.378764   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.378905   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.378964   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.378982   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.379178   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.379317   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.379375   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.379558   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.379804   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.379810   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.379870   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.380040   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.380247   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.380295   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.380344   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.380428   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:04.380598   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.380855   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.381248   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.381636   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.381654   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.381768   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.381900   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.381954   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.382128   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.382291   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.382841   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:04.383539   23130 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 18:16:04.384895   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 18:16:04.384895   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
	I0719 18:16:04.384968   23130 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 18:16:04.384985   23130 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 18:16:04.385003   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.385828   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:04.386295   23130 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 18:16:04.386309   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 18:16:04.386324   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.386324   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:04.386347   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:04.386725   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:04.386873   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:04.388331   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:04.388480   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.388825   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.388851   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.388984   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.389131   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.389248   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.389401   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.389847   23130 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 18:16:04.390208   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.390749   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.390774   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.390915   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.391072   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.391121   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 18:16:04.391132   23130 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 18:16:04.391145   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:04.391192   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.391337   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:04.393528   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.393826   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:04.393850   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:04.393960   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:04.394114   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:04.394239   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:04.394358   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	W0719 18:16:04.416108   23130 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39062->192.168.39.239:22: read: connection reset by peer
	I0719 18:16:04.416159   23130 retry.go:31] will retry after 361.4719ms: ssh: handshake failed: read tcp 192.168.39.1:39062->192.168.39.239:22: read: connection reset by peer
	I0719 18:16:04.626774   23130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
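For reference, the sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 on this run). Assuming the stock CoreDNS Corefile layout, the injected fragment looks roughly like the sketch below, and the result can be inspected with the same kubectl binary the test uses; this is an illustrative reconstruction, not additional test output.

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml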
	I0719 18:16:04.626897   23130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:16:04.630972   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 18:16:04.713646   23130 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 18:16:04.713667   23130 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 18:16:04.724415   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 18:16:04.724436   23130 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 18:16:04.779865   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 18:16:04.797110   23130 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 18:16:04.797140   23130 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 18:16:04.800986   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 18:16:04.824085   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 18:16:04.838707   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 18:16:04.838730   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 18:16:04.841405   23130 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 18:16:04.841421   23130 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 18:16:04.848239   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 18:16:04.852566   23130 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 18:16:04.852594   23130 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 18:16:04.861165   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 18:16:04.861186   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 18:16:04.867479   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:16:04.877043   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 18:16:04.890610   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 18:16:04.890632   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 18:16:04.964973   23130 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 18:16:04.965005   23130 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 18:16:04.970390   23130 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 18:16:04.970409   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 18:16:04.995412   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 18:16:04.995437   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 18:16:04.996525   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 18:16:05.015066   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 18:16:05.015090   23130 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 18:16:05.023595   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 18:16:05.023618   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 18:16:05.113664   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 18:16:05.125737   23130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 18:16:05.125768   23130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 18:16:05.126857   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 18:16:05.126877   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 18:16:05.166847   23130 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 18:16:05.166877   23130 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 18:16:05.191740   23130 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 18:16:05.191765   23130 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 18:16:05.283087   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 18:16:05.283113   23130 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 18:16:05.303323   23130 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 18:16:05.303357   23130 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 18:16:05.328468   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 18:16:05.328497   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 18:16:05.363121   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 18:16:05.402045   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 18:16:05.402070   23130 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 18:16:05.423264   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 18:16:05.423287   23130 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 18:16:05.451822   23130 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 18:16:05.451856   23130 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 18:16:05.467430   23130 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 18:16:05.467456   23130 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 18:16:05.564649   23130 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:05.564667   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 18:16:05.576712   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 18:16:05.576730   23130 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 18:16:05.645465   23130 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 18:16:05.645485   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 18:16:05.695565   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 18:16:05.695588   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 18:16:05.818884   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:05.854288   23130 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 18:16:05.854309   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 18:16:05.867893   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 18:16:05.874861   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 18:16:05.874889   23130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 18:16:06.036660   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 18:16:06.081365   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 18:16:06.081390   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 18:16:06.097823   23130 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.470896921s)
	I0719 18:16:06.097823   23130 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.471000797s)
	I0719 18:16:06.097948   23130 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 18:16:06.098794   23130 node_ready.go:35] waiting up to 6m0s for node "addons-990895" to be "Ready" ...
	I0719 18:16:06.102699   23130 node_ready.go:49] node "addons-990895" has status "Ready":"True"
	I0719 18:16:06.102718   23130 node_ready.go:38] duration metric: took 3.875988ms for node "addons-990895" to be "Ready" ...
	I0719 18:16:06.102725   23130 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:16:06.120945   23130 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:06.380801   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 18:16:06.380828   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 18:16:06.601963   23130 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-990895" context rescaled to 1 replicas
	I0719 18:16:06.641587   23130 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 18:16:06.641613   23130 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 18:16:06.787999   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.156992425s)
	I0719 18:16:06.788059   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:06.788070   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:06.788408   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:06.788451   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:06.788468   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:06.788481   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:06.788491   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:06.788763   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:06.788780   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:06.788792   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:06.826901   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 18:16:08.254304   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:08.982958   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.203060529s)
	I0719 18:16:08.982979   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.181964227s)
	I0719 18:16:08.983007   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983020   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983020   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983033   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983052   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.158938631s)
	I0719 18:16:08.983083   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983099   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983129   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.134867104s)
	I0719 18:16:08.983160   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983173   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983191   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.11569029s)
	I0719 18:16:08.983221   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983232   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983624   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983640   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983650   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983659   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983756   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.983783   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983791   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983799   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983806   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.983906   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.983943   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.983959   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.983976   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.983993   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984061   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984071   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984078   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.984085   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984142   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.984180   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984197   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984214   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:08.984229   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:08.984579   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984600   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.984878   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.984933   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.984951   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985578   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.985609   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985616   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985788   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:08.985821   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985868   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:08.985850   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:08.985927   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:09.082357   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:09.082382   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:09.082728   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:09.082747   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:09.082754   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	W0719 18:16:09.082841   23130 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
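The conflict above happens while minikube flips the storageclass.kubernetes.io/is-default-class annotation to make "standard" the default: the local-path StorageClass was modified concurrently (most likely by the storage-provisioner-rancher apply that just completed), so the write against the stale resourceVersion is rejected and has to be retried. A hedged manual equivalent of what the addon is attempting (hypothetical commands, not part of the test run) would be:

	kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class=false --overwrite
	kubectl annotate storageclass standard storageclass.kubernetes.io/is-default-class=true --overwrite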
	I0719 18:16:09.101312   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:09.101337   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:09.101749   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:09.101757   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:09.101770   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:10.632000   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:11.378301   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 18:16:11.378334   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:11.381815   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.382226   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:11.382248   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.382500   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:11.382698   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:11.382880   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:11.383026   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:11.574790   23130 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 18:16:11.670857   23130 addons.go:234] Setting addon gcp-auth=true in "addons-990895"
	I0719 18:16:11.670922   23130 host.go:66] Checking if "addons-990895" exists ...
	I0719 18:16:11.671317   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:11.671345   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:11.686902   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0719 18:16:11.687296   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:11.687759   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:11.687782   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:11.688104   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:11.688578   23130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:16:11.688604   23130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:16:11.702757   23130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0719 18:16:11.703200   23130 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:16:11.703774   23130 main.go:141] libmachine: Using API Version  1
	I0719 18:16:11.703793   23130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:16:11.704124   23130 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:16:11.704446   23130 main.go:141] libmachine: (addons-990895) Calling .GetState
	I0719 18:16:11.706060   23130 main.go:141] libmachine: (addons-990895) Calling .DriverName
	I0719 18:16:11.706266   23130 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 18:16:11.706285   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHHostname
	I0719 18:16:11.709185   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.709648   23130 main.go:141] libmachine: (addons-990895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:90:82", ip: ""} in network mk-addons-990895: {Iface:virbr1 ExpiryTime:2024-07-19 19:15:26 +0000 UTC Type:0 Mac:52:54:00:15:90:82 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-990895 Clientid:01:52:54:00:15:90:82}
	I0719 18:16:11.709675   23130 main.go:141] libmachine: (addons-990895) DBG | domain addons-990895 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:90:82 in network mk-addons-990895
	I0719 18:16:11.709827   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHPort
	I0719 18:16:11.709995   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHKeyPath
	I0719 18:16:11.710173   23130 main.go:141] libmachine: (addons-990895) Calling .GetSSHUsername
	I0719 18:16:11.710365   23130 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/addons-990895/id_rsa Username:docker}
	I0719 18:16:12.304965   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.427887631s)
	I0719 18:16:12.305017   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305019   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.308470617s)
	I0719 18:16:12.305028   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305041   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305053   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305097   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.191394157s)
	I0719 18:16:12.305144   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305157   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305155   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.942005141s)
	I0719 18:16:12.305180   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305189   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305253   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.486339917s)
	W0719 18:16:12.305280   23130 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 18:16:12.305281   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305296   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305310   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305320   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305317   23130 retry.go:31] will retry after 248.791756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
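	(The two failures above are the usual CRD race: the VolumeSnapshotClass is applied in the same kubectl batch as the CRDs that define it, so the API server has not yet registered the new kind and kubectl reports "no matches for kind". minikube's own handling is what the log shows next: a short retry and then an `apply --force` pass. As a minimal sketch of the ordering fix the error message hints at — not what the test run does — the CRDs can be applied on their own, waited on, and only then followed by the resources that use them. The file paths below are the same addon manifests referenced in the log; the timeout value is illustrative.)

# Sketch only, assuming kubectl already points at the cluster.
# 1. Register the snapshot CRDs by themselves.
kubectl apply \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
# 2. Wait until the API server reports the CRDs as Established.
kubectl wait --for condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
  crd/volumesnapshots.snapshot.storage.k8s.io
# 3. Only now apply the class and controller manifests that depend on them.
kubectl apply \
  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml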
	I0719 18:16:12.305394   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.437470306s)
	I0719 18:16:12.305420   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305436   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305469   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305489   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305498   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305506   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305511   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.268821522s)
	I0719 18:16:12.305526   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305533   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305601   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305632   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305640   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305642   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305648   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305659   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305711   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305718   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305727   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305733   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305794   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305803   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305813   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.305812   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305820   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.305868   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305889   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.305890   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305898   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.305908   23130 addons.go:475] Verifying addon ingress=true in "addons-990895"
	I0719 18:16:12.305911   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.305917   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306026   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306035   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306062   23130 addons.go:475] Verifying addon metrics-server=true in "addons-990895"
	I0719 18:16:12.306114   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306231   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306239   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306005   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306320   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.306350   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.306356   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.306364   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.306370   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.307388   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.307416   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.307423   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.307431   23130 addons.go:475] Verifying addon registry=true in "addons-990895"
	I0719 18:16:12.308487   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.308512   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.308519   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.309678   23130 out.go:177] * Verifying ingress addon...
	I0719 18:16:12.309705   23130 out.go:177] * Verifying registry addon...
	I0719 18:16:12.309710   23130 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-990895 service yakd-dashboard -n yakd-dashboard
	
	I0719 18:16:12.311673   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 18:16:12.311863   23130 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 18:16:12.319304   23130 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 18:16:12.319327   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:12.319368   23130 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 18:16:12.319383   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:12.555314   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 18:16:12.644659   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:12.824859   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:12.842719   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:12.852759   23130 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.146471529s)
	I0719 18:16:12.852812   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.025865762s)
	I0719 18:16:12.852854   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.852866   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.853103   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.853120   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.853130   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:12.853138   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:12.853425   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:12.853459   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:12.853472   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:12.853484   23130 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-990895"
	I0719 18:16:12.854245   23130 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 18:16:12.855081   23130 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 18:16:12.856574   23130 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 18:16:12.857251   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 18:16:12.857823   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 18:16:12.857837   23130 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 18:16:12.886317   23130 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 18:16:12.886345   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:12.956078   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 18:16:12.956100   23130 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 18:16:13.011490   23130 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 18:16:13.011513   23130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 18:16:13.060502   23130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 18:16:13.321141   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:13.322776   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:13.368340   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:13.817001   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:13.819914   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:13.862984   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:13.952016   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.396649876s)
	I0719 18:16:13.952076   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:13.952093   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:13.952391   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:13.952414   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:13.952425   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:13.952435   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:13.952440   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:13.952696   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:13.952715   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.355904   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:14.399725   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:14.400325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:14.541439   23130 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.48088954s)
	I0719 18:16:14.541497   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:14.541514   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:14.541837   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:14.541896   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:14.541909   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.541921   23130 main.go:141] libmachine: Making call to close driver server
	I0719 18:16:14.541931   23130 main.go:141] libmachine: (addons-990895) Calling .Close
	I0719 18:16:14.542148   23130 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:16:14.542165   23130 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:16:14.542193   23130 main.go:141] libmachine: (addons-990895) DBG | Closing plugin on server side
	I0719 18:16:14.543406   23130 addons.go:475] Verifying addon gcp-auth=true in "addons-990895"
	I0719 18:16:14.545780   23130 out.go:177] * Verifying gcp-auth addon...
	I0719 18:16:14.547531   23130 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 18:16:14.559467   23130 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 18:16:14.559483   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:14.819034   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:14.819705   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:14.864931   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:15.052919   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:15.128652   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:15.317950   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:15.318755   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:15.362545   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:15.551707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:15.816832   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:15.816954   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:15.862023   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:16.051431   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:16.317449   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:16.318272   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:16.362507   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:16.551610   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:16.817007   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:16.817090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:16.862740   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:17.050818   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:17.316273   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:17.316837   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:17.363280   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:17.552315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:17.627357   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:17.816631   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:17.816704   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:17.865746   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:18.051216   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:18.317681   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:18.318498   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:18.363119   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:18.551646   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:18.816012   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:18.816953   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:18.862656   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:19.050697   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:19.317201   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:19.317500   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:19.362883   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:19.552847   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:19.819909   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:19.819938   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:19.866258   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:20.050748   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:20.130088   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:20.318306   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:20.320886   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:20.362123   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:20.551727   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:20.817251   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:20.817734   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:20.863089   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:21.052141   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:21.317551   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:21.319537   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:21.364102   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:21.550903   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:21.816558   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:21.817139   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:21.862741   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:22.051238   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:22.316224   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:22.317481   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:22.362765   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:22.551985   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:22.629159   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:22.816837   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:22.816999   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:22.862454   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:23.050790   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:23.316818   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:23.317166   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:23.363018   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:23.551044   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:23.817713   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:23.821090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:23.862480   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:24.050929   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:24.317208   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:24.317429   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:24.362442   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:24.551587   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:24.816854   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:24.816888   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:24.862528   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:25.051911   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:25.127502   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:25.316959   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:25.317349   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:25.363011   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:25.551190   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:25.816951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:25.817523   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:25.862334   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:26.051678   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:26.316466   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:26.316468   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:26.362638   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:26.551088   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:26.816198   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:26.816473   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:26.862602   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:27.051163   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:27.316761   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:27.316838   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:27.362408   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:27.551767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:27.626969   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:27.816479   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:27.817759   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:27.861832   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:28.051306   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:28.316938   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:28.317478   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:28.362880   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:28.551629   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:28.816939   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:28.817222   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:28.862101   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:29.052315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:29.317760   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:29.319395   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:29.361885   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:29.551177   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:29.816222   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:29.816333   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:29.863418   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:30.051621   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:30.126982   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:30.316869   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:30.317205   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:30.370889   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:30.551527   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:30.816461   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:30.818197   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:30.863576   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:31.051275   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:31.320281   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:31.325063   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:31.362665   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:31.551147   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:31.817499   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:31.819075   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:31.863068   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:32.051522   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:32.319684   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:32.320106   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:32.365589   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:32.550829   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:32.627428   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:32.817760   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:32.817952   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:32.862024   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:33.051684   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:33.317900   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:33.320593   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:33.363423   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:33.551381   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:33.817770   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:33.818044   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:33.865768   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:34.051190   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:34.316611   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:34.316611   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:34.366207   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:34.551421   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:34.817136   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:34.817233   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.150325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:35.152471   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:35.153818   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:35.316655   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:35.318319   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.363080   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:35.551140   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:35.816891   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:35.817029   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:35.862820   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:36.051340   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:36.317547   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:36.318161   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:36.362339   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:36.551499   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:36.816522   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:36.816602   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:36.862771   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:37.051363   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:37.317264   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:37.317286   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:37.362758   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:37.551024   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:37.632369   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:37.816376   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:37.816658   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:37.861934   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:38.051585   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:38.317147   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:38.318645   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:38.363070   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:38.551463   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:38.816947   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:38.818274   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:38.863018   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:39.051203   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:39.316455   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:39.317389   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:39.362301   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:39.551334   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:39.818326   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:39.822073   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:39.862734   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:40.051342   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:40.126991   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:40.316712   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:40.317266   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:40.362875   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:40.551394   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:40.816524   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:40.818270   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:40.862767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:41.051712   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:41.315730   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:41.316462   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:41.362509   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:41.552060   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.022090   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.022982   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.024016   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:42.051048   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.128190   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:42.316921   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.318160   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.362938   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:42.550980   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:42.816769   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:42.818187   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:42.863224   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:43.051179   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:43.316249   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:43.317510   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:43.363365   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:43.551675   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:43.818606   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:43.819189   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:43.863922   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:44.050856   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:44.323732   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:44.323787   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:44.369697   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:44.551391   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:44.626626   23130 pod_ready.go:102] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"False"
	I0719 18:16:44.817539   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:44.818730   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:44.862926   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:45.056815   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:45.317905   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:45.318222   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:45.362315   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:45.551082   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:45.817397   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:45.818038   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:45.867568   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.055468   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:46.126800   23130 pod_ready.go:92] pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.126820   23130 pod_ready.go:81] duration metric: took 40.005850885s for pod "coredns-7db6d8ff4d-jwrch" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.126830   23130 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.128711   23130 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-vcfkz" not found
	I0719 18:16:46.128732   23130 pod_ready.go:81] duration metric: took 1.895882ms for pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace to be "Ready" ...
	E0719 18:16:46.128740   23130 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-vcfkz" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-vcfkz" not found
	I0719 18:16:46.128746   23130 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.134823   23130 pod_ready.go:92] pod "etcd-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.134849   23130 pod_ready.go:81] duration metric: took 6.095984ms for pod "etcd-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.134861   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.139198   23130 pod_ready.go:92] pod "kube-apiserver-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.139213   23130 pod_ready.go:81] duration metric: took 4.344392ms for pod "kube-apiserver-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.139221   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.143467   23130 pod_ready.go:92] pod "kube-controller-manager-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.143487   23130 pod_ready.go:81] duration metric: took 4.260676ms for pod "kube-controller-manager-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.143496   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p5fz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.318373   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:46.318566   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:46.325062   23130 pod_ready.go:92] pod "kube-proxy-7p5fz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.325079   23130 pod_ready.go:81] duration metric: took 181.577719ms for pod "kube-proxy-7p5fz" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.325088   23130 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.362886   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.551445   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:46.724931   23130 pod_ready.go:92] pod "kube-scheduler-addons-990895" in "kube-system" namespace has status "Ready":"True"
	I0719 18:16:46.724961   23130 pod_ready.go:81] duration metric: took 399.865474ms for pod "kube-scheduler-addons-990895" in "kube-system" namespace to be "Ready" ...
	I0719 18:16:46.724972   23130 pod_ready.go:38] duration metric: took 40.622235448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:16:46.724994   23130 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:16:46.725057   23130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:16:46.757121   23130 api_server.go:72] duration metric: took 42.542967215s to wait for apiserver process to appear ...
	I0719 18:16:46.757146   23130 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:16:46.757167   23130 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0719 18:16:46.760999   23130 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0719 18:16:46.762005   23130 api_server.go:141] control plane version: v1.30.3
	I0719 18:16:46.762025   23130 api_server.go:131] duration metric: took 4.872839ms to wait for apiserver health ...
	I0719 18:16:46.762032   23130 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:16:46.819220   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:46.821402   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:46.881610   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:46.933743   23130 system_pods.go:59] 18 kube-system pods found
	I0719 18:16:46.933771   23130 system_pods.go:61] "coredns-7db6d8ff4d-jwrch" [3a132301-51a7-481d-8442-0d5c256608bd] Running
	I0719 18:16:46.933778   23130 system_pods.go:61] "csi-hostpath-attacher-0" [6041967e-faa9-4589-8fba-2d23c0bf4ed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 18:16:46.933785   23130 system_pods.go:61] "csi-hostpath-resizer-0" [a1ef532c-9919-4a10-8bf3-99c69ebdcf31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 18:16:46.933792   23130 system_pods.go:61] "csi-hostpathplugin-lljpc" [d10c0530-0314-425d-a850-6077fe481809] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 18:16:46.933797   23130 system_pods.go:61] "etcd-addons-990895" [d979a63a-0e7b-4787-9bde-978357bb3d20] Running
	I0719 18:16:46.933801   23130 system_pods.go:61] "kube-apiserver-addons-990895" [1e1c5d53-2489-4dfc-a17d-796526db2aa5] Running
	I0719 18:16:46.933805   23130 system_pods.go:61] "kube-controller-manager-addons-990895" [899afe35-74d9-4bae-b841-498dba02e08d] Running
	I0719 18:16:46.933809   23130 system_pods.go:61] "kube-ingress-dns-minikube" [396f1196-8e67-41ea-b4d4-86b6c843c63e] Running
	I0719 18:16:46.933812   23130 system_pods.go:61] "kube-proxy-7p5fz" [480bf6bd-8a9c-4e93-aa4e-1af321bcf057] Running
	I0719 18:16:46.933815   23130 system_pods.go:61] "kube-scheduler-addons-990895" [81fedc77-3d19-4c5e-bb4e-3503d864f120] Running
	I0719 18:16:46.933820   23130 system_pods.go:61] "metrics-server-c59844bb4-mg572" [15b48dfb-deb0-43b5-9a04-9d6896e8c5da] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 18:16:46.933829   23130 system_pods.go:61] "nvidia-device-plugin-daemonset-dpnm9" [a47bd84e-c957-4484-ad2b-20f5a4582ed9] Running
	I0719 18:16:46.933835   23130 system_pods.go:61] "registry-656c9c8d9c-fvxpl" [db275727-fea8-4f8b-9836-c0f394f7e085] Running
	I0719 18:16:46.933839   23130 system_pods.go:61] "registry-proxy-5sm5z" [5e03528f-ac60-402a-8478-15cca52a26f9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 18:16:46.933847   23130 system_pods.go:61] "snapshot-controller-745499f584-nbp46" [d6ca72ac-b155-499f-9920-a996a141f321] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:46.933853   23130 system_pods.go:61] "snapshot-controller-745499f584-pcwbg" [dc8b6b85-3529-4e88-b820-30d093e5050c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:46.933856   23130 system_pods.go:61] "storage-provisioner" [18afc862-2719-4e26-be45-80599ec151ea] Running
	I0719 18:16:46.933860   23130 system_pods.go:61] "tiller-deploy-6677d64bcd-rgpbq" [e4c9ebcd-0308-48aa-ad08-d6e84e6dbbe3] Running
	I0719 18:16:46.933865   23130 system_pods.go:74] duration metric: took 171.827786ms to wait for pod list to return data ...
	I0719 18:16:46.933871   23130 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:16:47.051234   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:47.124524   23130 default_sa.go:45] found service account: "default"
	I0719 18:16:47.124548   23130 default_sa.go:55] duration metric: took 190.67009ms for default service account to be created ...
	I0719 18:16:47.124564   23130 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:16:47.321951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:47.322420   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:47.333952   23130 system_pods.go:86] 18 kube-system pods found
	I0719 18:16:47.333984   23130 system_pods.go:89] "coredns-7db6d8ff4d-jwrch" [3a132301-51a7-481d-8442-0d5c256608bd] Running
	I0719 18:16:47.333998   23130 system_pods.go:89] "csi-hostpath-attacher-0" [6041967e-faa9-4589-8fba-2d23c0bf4ed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 18:16:47.334009   23130 system_pods.go:89] "csi-hostpath-resizer-0" [a1ef532c-9919-4a10-8bf3-99c69ebdcf31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 18:16:47.334021   23130 system_pods.go:89] "csi-hostpathplugin-lljpc" [d10c0530-0314-425d-a850-6077fe481809] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 18:16:47.334033   23130 system_pods.go:89] "etcd-addons-990895" [d979a63a-0e7b-4787-9bde-978357bb3d20] Running
	I0719 18:16:47.334040   23130 system_pods.go:89] "kube-apiserver-addons-990895" [1e1c5d53-2489-4dfc-a17d-796526db2aa5] Running
	I0719 18:16:47.334050   23130 system_pods.go:89] "kube-controller-manager-addons-990895" [899afe35-74d9-4bae-b841-498dba02e08d] Running
	I0719 18:16:47.334061   23130 system_pods.go:89] "kube-ingress-dns-minikube" [396f1196-8e67-41ea-b4d4-86b6c843c63e] Running
	I0719 18:16:47.334069   23130 system_pods.go:89] "kube-proxy-7p5fz" [480bf6bd-8a9c-4e93-aa4e-1af321bcf057] Running
	I0719 18:16:47.334078   23130 system_pods.go:89] "kube-scheduler-addons-990895" [81fedc77-3d19-4c5e-bb4e-3503d864f120] Running
	I0719 18:16:47.334090   23130 system_pods.go:89] "metrics-server-c59844bb4-mg572" [15b48dfb-deb0-43b5-9a04-9d6896e8c5da] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 18:16:47.334100   23130 system_pods.go:89] "nvidia-device-plugin-daemonset-dpnm9" [a47bd84e-c957-4484-ad2b-20f5a4582ed9] Running
	I0719 18:16:47.334109   23130 system_pods.go:89] "registry-656c9c8d9c-fvxpl" [db275727-fea8-4f8b-9836-c0f394f7e085] Running
	I0719 18:16:47.334121   23130 system_pods.go:89] "registry-proxy-5sm5z" [5e03528f-ac60-402a-8478-15cca52a26f9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 18:16:47.334175   23130 system_pods.go:89] "snapshot-controller-745499f584-nbp46" [d6ca72ac-b155-499f-9920-a996a141f321] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:47.334194   23130 system_pods.go:89] "snapshot-controller-745499f584-pcwbg" [dc8b6b85-3529-4e88-b820-30d093e5050c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 18:16:47.334205   23130 system_pods.go:89] "storage-provisioner" [18afc862-2719-4e26-be45-80599ec151ea] Running
	I0719 18:16:47.334215   23130 system_pods.go:89] "tiller-deploy-6677d64bcd-rgpbq" [e4c9ebcd-0308-48aa-ad08-d6e84e6dbbe3] Running
	I0719 18:16:47.334226   23130 system_pods.go:126] duration metric: took 209.655247ms to wait for k8s-apps to be running ...
	I0719 18:16:47.334238   23130 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:16:47.334296   23130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:16:47.364748   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:47.365451   23130 system_svc.go:56] duration metric: took 31.208011ms WaitForService to wait for kubelet
	I0719 18:16:47.365476   23130 kubeadm.go:582] duration metric: took 43.151326989s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:16:47.365499   23130 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:16:47.525027   23130 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:16:47.525050   23130 node_conditions.go:123] node cpu capacity is 2
	I0719 18:16:47.525063   23130 node_conditions.go:105] duration metric: took 159.558548ms to run NodePressure ...
	I0719 18:16:47.525074   23130 start.go:241] waiting for startup goroutines ...
	I0719 18:16:47.551166   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:47.816974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:47.817126   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:47.862582   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:48.051109   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:48.317707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:48.317967   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:48.362535   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:48.550912   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:48.816751   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:48.816810   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:48.862255   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:49.051533   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:49.315971   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:49.316640   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:49.362933   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:49.551102   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:49.816474   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:49.816623   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:49.862237   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:50.051162   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:50.317616   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:50.318090   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:50.362291   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:50.550882   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:50.819527   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 18:16:50.826694   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:50.862295   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:51.050917   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:51.315650   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:51.317528   23130 kapi.go:107] duration metric: took 39.005853444s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 18:16:51.362844   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:51.551376   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:51.819784   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:51.865606   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:52.052168   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:52.315614   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:52.362888   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:52.551777   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:52.817085   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:52.863078   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:53.050907   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:53.315642   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:53.364292   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:53.551695   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:53.816708   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:53.867211   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:54.051305   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:54.317137   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:54.362619   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:54.551409   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:54.820850   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:54.862972   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:55.053401   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:55.316366   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:55.361907   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:55.551198   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:55.815798   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:55.867517   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:56.051259   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:56.474900   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:56.476066   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:56.552329   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:56.816320   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:56.862825   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:57.051391   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:57.316136   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:57.362170   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:57.550211   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:57.816439   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:57.864045   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:58.051555   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:58.317307   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:58.362634   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:58.550751   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:58.817096   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:58.861992   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:59.052933   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:59.316913   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:59.362407   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:16:59.551066   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:16:59.815813   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:16:59.864947   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:00.051156   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:00.317112   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:00.365404   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:00.556563   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:00.816208   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:00.861862   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:01.051476   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:01.316120   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:01.365138   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:01.842896   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:01.843150   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:01.861889   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:02.051885   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:02.316764   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:02.362899   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:02.551205   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:02.815517   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:02.863494   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:03.051891   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:03.317335   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:03.362004   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:03.552279   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:03.823390   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:03.865216   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:04.053146   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:04.316535   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:04.363052   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:04.550853   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:04.816202   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:04.863145   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:05.051644   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:05.316428   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:05.362525   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:05.551458   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:05.816248   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:05.862272   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:06.050564   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:06.317699   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:06.362951   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:06.551383   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:06.959189   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:06.959940   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:07.051201   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:07.315815   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:07.365167   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:07.551168   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:07.816584   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:07.862836   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:08.051325   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:08.316523   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:08.363070   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:08.552118   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:08.816202   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:08.863609   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:09.051311   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:09.322797   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:09.362702   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:09.551528   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:09.816251   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:09.862348   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:10.051576   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:10.317168   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:10.362892   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:10.670783   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:10.816910   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:10.863055   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:11.051437   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:11.316141   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:11.363022   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:11.551619   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:11.816583   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:11.865847   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:12.051178   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:12.317752   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:12.362871   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:12.551186   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:12.817028   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:12.866106   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:13.051525   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:13.316669   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:13.362824   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:13.551310   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:13.816725   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:13.862882   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:14.051395   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:14.317129   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:14.365767   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:14.551146   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:14.816102   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:14.874682   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:15.051214   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:15.316329   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:15.369634   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:15.551508   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:15.816314   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:15.862510   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:16.050926   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:16.316680   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:16.364438   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:16.551919   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:16.818629   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:16.862974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:17.051910   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:17.315599   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:17.363096   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:17.551823   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:17.816928   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:17.862047   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:18.052572   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:18.316517   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:18.362868   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:18.550480   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:18.817688   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:18.868338   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:19.051790   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:19.317032   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:19.363119   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:19.558055   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.031775   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.032793   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:20.052920   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.317123   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.363042   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:20.550647   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:20.816161   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:20.863152   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:21.051281   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:21.318518   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:21.362973   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:21.550899   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:21.817187   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:21.863639   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:22.051103   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:22.316328   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:22.362419   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.040992   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.044282   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.044495   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.054760   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.317688   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.361898   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:23.551279   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:23.816661   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:23.863224   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:24.052723   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:24.316358   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:24.371049   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:24.551179   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:24.816475   23130 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 18:17:24.863225   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:25.054597   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:25.316908   23130 kapi.go:107] duration metric: took 1m13.005042748s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 18:17:25.365061   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:25.551295   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:25.873747   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:26.455220   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:26.465831   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:26.550967   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:26.863271   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:27.051557   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:27.362976   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:27.552145   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:27.862719   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:28.054376   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:28.362643   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:28.550895   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:28.863403   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:29.051555   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:29.362330   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:29.550770   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:29.863014   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:30.051974   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 18:17:30.370707   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:30.551009   23130 kapi.go:107] duration metric: took 1m16.003475156s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 18:17:30.552884   23130 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-990895 cluster.
	I0719 18:17:30.554534   23130 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 18:17:30.555882   23130 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 18:17:30.863800   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:31.369378   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:31.868656   23130 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 18:17:32.363109   23130 kapi.go:107] duration metric: took 1m19.505854784s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 18:17:32.364719   23130 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0719 18:17:32.365888   23130 addons.go:510] duration metric: took 1m28.151695498s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0719 18:17:32.365925   23130 start.go:246] waiting for cluster config update ...
	I0719 18:17:32.365947   23130 start.go:255] writing updated cluster config ...
	I0719 18:17:32.366180   23130 ssh_runner.go:195] Run: rm -f paused
	I0719 18:17:32.415428   23130 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 18:17:32.417272   23130 out.go:177] * Done! kubectl is now configured to use "addons-990895" cluster and "default" namespace by default
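	(Editorial note on the gcp-auth output above: the addon advises adding a label with the `gcp-auth-skip-secret` key to a pod's configuration to opt that pod out of credential mounting. As an illustrative sketch only, with a hypothetical pod name and image, and assuming the webhook keys on this label as the message states, such a pod could be created against this cluster with:
	    kubectl --context addons-990895 run skip-demo --image=nginx --labels="gcp-auth-skip-secret=true"
	The label must be present when the pod is admitted; per the message above, pods created before enabling the addon need to be recreated or the addon re-enabled with --refresh.)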
	
	
	==> CRI-O <==
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.271232269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413386271204464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9831e1b-9552-4547-943f-db019f2123a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.271835328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcee9dce-56d6-42b7-8f20-05abcbbcd608 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.271897421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcee9dce-56d6-42b7-8f20-05abcbbcd608 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.274419885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17214130
23779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4,PodSandbo
xId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe74884b0c77ed0c1
3e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c8d9,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcee9dce-56d6-42b7-8f20-05abcbbcd608 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.312709173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa41aa48-c47e-4224-870d-681cc3a5e9a2 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.312791428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa41aa48-c47e-4224-870d-681cc3a5e9a2 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.313682581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=734eaf45-7c55-43fd-a7c3-409668139e08 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.315253618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413386315226594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=734eaf45-7c55-43fd-a7c3-409668139e08 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.315789289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cad18cb6-fb7f-45a7-9a28-a84b96c62970 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.315841711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cad18cb6-fb7f-45a7-9a28-a84b96c62970 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.316275147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17214130
23779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4,PodSandbo
xId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe74884b0c77ed0c1
3e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c8d9,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cad18cb6-fb7f-45a7-9a28-a84b96c62970 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.349943466Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ca5e521-6273-42a7-843f-0dd25489a29c name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.350026244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ca5e521-6273-42a7-843f-0dd25489a29c name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.350956304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92212f1c-35c4-486b-a915-c50545dd3088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.352133135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413386352107565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92212f1c-35c4-486b-a915-c50545dd3088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.352763161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abd67f93-0066-46d9-a2a8-5828db7eda84 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.352818631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abd67f93-0066-46d9-a2a8-5828db7eda84 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.353081163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17214130
23779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4,PodSandbo
xId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe74884b0c77ed0c1
3e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c8d9,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abd67f93-0066-46d9-a2a8-5828db7eda84 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.388559878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f8a1473-7bce-41fa-9241-fa1299d15b08 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.388629775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f8a1473-7bce-41fa-9241-fa1299d15b08 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.389818340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=711646a3-9ec6-4f2c-95ad-f16361d5709a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.390990394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721413386390965502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=711646a3-9ec6-4f2c-95ad-f16361d5709a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.391738108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e51758e-d198-48dd-b152-568f19bafccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.391791438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e51758e-d198-48dd-b152-568f19bafccc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:23:06 addons-990895 crio[679]: time="2024-07-19 18:23:06.392190543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0256ab93b757ea6e892b9ca172de7bada767616432e55efa7865227d9ef55316,PodSandboxId:79621fa66f4186c950d43e109b2ad001e6ef61aa114479906017e52c2e3c4399,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721413228225885450,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m4qdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26248856-8eff-4f35-a658-945273146c0f,},Annotations:map[string]string{io.kubernetes.container.hash: ca3bfd36,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006cea3b2166cbab72fbc2418ee4eec7acb50c8499e6449507dc2817f66f2491,PodSandboxId:9d3e46f75e8d2ee7a4bff0c81dd78ec719415de2d262c96bf2cdb43bd02fecde,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721413086837852985,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65ee0f77-c5d6-40a1-9307-10d20caec993,},Annotations:map[string]string{io.kubernet
es.container.hash: 1c06ed37,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9830b71c2bcaf1ace67d64ad1ffae76d66b8a540e5c1cd447c9a1a459a942813,PodSandboxId:2c3a546e0577e211fc294882112576f3a7b4c0b26d4ab7e410e77c5d57e35c61,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721413067762223752,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-ggx6p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 3efb6210-449e-46c2-b799-ac0a331ba22f,},Annotations:map[string]string{io.kubernetes.container.hash: 8d62ed23,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c,PodSandboxId:c72e3404ed94f602c911c49e366f1edbd569db1ce4d3674d75dbe21f693e7e00,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721413049734840114,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-859fv,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 27335e38-f907-4c58-bb7d-35f271820bb0,},Annotations:map[string]string{io.kubernetes.container.hash: 19edcaa,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b39d84faccad88505a87e8e42b57ff41c0e79e9e4ea195ffca13d114bce32fb5,PodSandboxId:b10b41564ba7493ee0a0e9b77f804a0bcf48daee663c33c593f0b62dadff85a2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:17214130
23779956278,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-z49l4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45,},Annotations:map[string]string{io.kubernetes.container.hash: 2d11ec72,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb809f616606e0f7d078aa271390a7a6ec70c54c6577d8d118609f928b1b3ae,PodSandboxId:eb4d9b2e9b8ffd078be1fb36b070df74da442d4b785c5bdf3ac7fc50919aa2c8,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721413004331278419,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-mg572,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b48dfb-deb0-43b5-9a04-9d6896e8c5da,},Annotations:map[string]string{io.kubernetes.container.hash: 9fb5ca14,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439,PodSandboxId:9cd816ea8024a13c3aa5893934c685aa986b15b09c6e74a42c26560bd08b0d8d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721412970387278736,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18afc862-2719-4e26-be45-80599ec151ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3b65558a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8,PodSandboxId:00a04b0e111a0d51e127bfad1e291a3eaf6717bc8c179590d49a818df0e558c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797
ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721412968676043223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jwrch,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a132301-51a7-481d-8442-0d5c256608bd,},Annotations:map[string]string{io.kubernetes.container.hash: d388be4a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4,PodSandbo
xId:0c21da4194a4e49c502747e6666308a5d418e18de26772670ea188ec62b96d92,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721412966553102514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7p5fz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480bf6bd-8a9c-4e93-aa4e-1af321bcf057,},Annotations:map[string]string{io.kubernetes.container.hash: 21ea0dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808,PodSandboxId:eb0b90522f2dfe74884b0c77ed0c1
3e975fb86447d8e134b191d5ec758f4264f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721412945440718999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fab1af5d38afbcdc021f600a1091666,},Annotations:map[string]string{io.kubernetes.container.hash: 206159ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95,PodSandboxId:6c312e29dbb865321a981c1edad10226bc5420d26bf788a1b018dbf178c9c8d9,Metadata:&C
ontainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721412945434291705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e35b61a8e413700b1f8556d95cc577c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64,PodSandboxId:3bca9b426aad91c7a27013a10362982ea66d35eec9fe54fe1e15b12198966c01,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721412945425212638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ef6e2e1e8506e1b74fb0ec198106b5c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d,PodSandboxId:4b8769217aee60786cb08fdf3060f737a07a17a36d434dbff669668275b1bbe3,Metadata:&Containe
rMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721412945433444277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225843ab613cce86ccbcf332bc4d011d,},Annotations:map[string]string{io.kubernetes.container.hash: fbb46eab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e51758e-d198-48dd-b152-568f19bafccc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0256ab93b757e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   79621fa66f418       hello-world-app-6778b5fc9f-m4qdd
	006cea3b2166c       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         4 minutes ago       Running             nginx                     0                   9d3e46f75e8d2       nginx
	9830b71c2bcaf       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   2c3a546e0577e       headlamp-7867546754-ggx6p
	aea9cff6ef549       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   c72e3404ed94f       gcp-auth-5db96cd9b4-859fv
	b39d84faccad8       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   b10b41564ba74       yakd-dashboard-799879c74f-z49l4
	3eb809f616606       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   eb4d9b2e9b8ff       metrics-server-c59844bb4-mg572
	20f6f7da0a8f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   9cd816ea8024a       storage-provisioner
	21cd681993385       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago       Running             coredns                   0                   00a04b0e111a0       coredns-7db6d8ff4d-jwrch
	c7d96760d4675       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        6 minutes ago       Running             kube-proxy                0                   0c21da4194a4e       kube-proxy-7p5fz
	a8f8b45a1729c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   eb0b90522f2df       etcd-addons-990895
	8c4e6fd25057f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   6c312e29dbb86       kube-scheduler-addons-990895
	fa9964845eb57       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   4b8769217aee6       kube-apiserver-addons-990895
	fea3fc9975756       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   3bca9b426aad9       kube-controller-manager-addons-990895
	
	
	==> coredns [21cd6819933856224140625e54b2907b17f6814e345acc23a0494f0d6706c6c8] <==
	[INFO] 10.244.0.6:41554 - 50086 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112725s
	[INFO] 10.244.0.6:57414 - 60528 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100214s
	[INFO] 10.244.0.6:57414 - 26493 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110981s
	[INFO] 10.244.0.6:39655 - 15772 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086603s
	[INFO] 10.244.0.6:39655 - 3746 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056977s
	[INFO] 10.244.0.6:37715 - 22527 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147521s
	[INFO] 10.244.0.6:37715 - 5373 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191802s
	[INFO] 10.244.0.6:40998 - 4681 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104731s
	[INFO] 10.244.0.6:40998 - 20820 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000029699s
	[INFO] 10.244.0.6:39074 - 4806 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062341s
	[INFO] 10.244.0.6:39074 - 37688 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026965s
	[INFO] 10.244.0.6:54863 - 21888 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005239s
	[INFO] 10.244.0.6:54863 - 16002 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027191s
	[INFO] 10.244.0.6:49680 - 5507 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038985s
	[INFO] 10.244.0.6:49680 - 32924 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000045142s
	[INFO] 10.244.0.21:47372 - 18729 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000524215s
	[INFO] 10.244.0.21:42728 - 41363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000585279s
	[INFO] 10.244.0.21:55559 - 12452 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161149s
	[INFO] 10.244.0.21:36444 - 26844 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109007s
	[INFO] 10.244.0.21:42041 - 28288 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101659s
	[INFO] 10.244.0.21:36779 - 43252 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150629s
	[INFO] 10.244.0.21:43719 - 37933 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000367s
	[INFO] 10.244.0.21:56968 - 25142 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000600053s
	[INFO] 10.244.0.24:47193 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00135043s
	[INFO] 10.244.0.24:33629 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112674s
	
	
	==> describe nodes <==
	Name:               addons-990895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-990895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=addons-990895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_15_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-990895
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:15:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-990895
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:22:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:20:57 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:20:57 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:20:57 +0000   Fri, 19 Jul 2024 18:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:20:57 +0000   Fri, 19 Jul 2024 18:15:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    addons-990895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f93e701e37e4a1d91833568e7381528
	  System UUID:                4f93e701-e37e-4a1d-9183-3568e7381528
	  Boot ID:                    375557a4-c3ff-4528-aec2-4e8e0e5a58cf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-m4qdd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  gcp-auth                    gcp-auth-5db96cd9b4-859fv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  headlamp                    headlamp-7867546754-ggx6p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 coredns-7db6d8ff4d-jwrch                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m2s
	  kube-system                 etcd-addons-990895                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m16s
	  kube-system                 kube-apiserver-addons-990895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-controller-manager-addons-990895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-proxy-7p5fz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-addons-990895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 metrics-server-c59844bb4-mg572           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m57s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  yakd-dashboard              yakd-dashboard-799879c74f-z49l4          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m56s  kube-proxy       
	  Normal  Starting                 7m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m16s  kubelet          Node addons-990895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m16s  kubelet          Node addons-990895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m16s  kubelet          Node addons-990895 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m15s  kubelet          Node addons-990895 status is now: NodeReady
	  Normal  RegisteredNode           7m2s   node-controller  Node addons-990895 event: Registered Node addons-990895 in Controller
	
	
	==> dmesg <==
	[  +6.866741] kauditd_printk_skb: 18 callbacks suppressed
	[Jul19 18:16] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +5.170765] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.206798] kauditd_printk_skb: 164 callbacks suppressed
	[ +10.802979] kauditd_printk_skb: 46 callbacks suppressed
	[ +16.607031] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.583469] kauditd_printk_skb: 8 callbacks suppressed
	[Jul19 18:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.362704] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.104312] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.770955] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.004650] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.029535] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.307196] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.528522] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.894844] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.053126] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.463056] kauditd_printk_skb: 39 callbacks suppressed
	[Jul19 18:18] kauditd_printk_skb: 29 callbacks suppressed
	[ +19.949442] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.059099] kauditd_printk_skb: 3 callbacks suppressed
	[ +19.126366] kauditd_printk_skb: 7 callbacks suppressed
	[Jul19 18:19] kauditd_printk_skb: 33 callbacks suppressed
	[Jul19 18:20] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.151290] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a8f8b45a1729c7e1f73c65ba85f66493fde939076da189770cd80cb76bf7f808] <==
	{"level":"warn","ts":"2024-07-19T18:17:26.431073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:26.010081Z","time spent":"420.987177ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":4619,"request content":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-xxz68\" "}
	{"level":"info","ts":"2024-07-19T18:17:26.43064Z","caller":"traceutil/trace.go:171","msg":"trace[1604364992] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"419.954168ms","start":"2024-07-19T18:17:26.010117Z","end":"2024-07-19T18:17:26.430071Z","steps":["trace[1604364992] 'read index received'  (duration: 413.28645ms)","trace[1604364992] 'applied index is now lower than readState.Index'  (duration: 6.666969ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T18:17:26.432734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.433021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-19T18:17:26.4328Z","caller":"traceutil/trace.go:171","msg":"trace[1971354546] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1125; }","duration":"395.525265ms","start":"2024-07-19T18:17:26.037266Z","end":"2024-07-19T18:17:26.432791Z","steps":["trace[1971354546] 'agreement among raft nodes before linearized reading'  (duration: 395.352742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:26.432844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:26.037251Z","time spent":"395.584389ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-07-19T18:17:31.230495Z","caller":"traceutil/trace.go:171","msg":"trace[1378868475] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"145.171608ms","start":"2024-07-19T18:17:31.0853Z","end":"2024-07-19T18:17:31.230471Z","steps":["trace[1378868475] 'process raft request'  (duration: 144.821113ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:17:47.654447Z","caller":"traceutil/trace.go:171","msg":"trace[46878725] linearizableReadLoop","detail":"{readStateIndex:1351; appliedIndex:1350; }","duration":"362.054947ms","start":"2024-07-19T18:17:47.292375Z","end":"2024-07-19T18:17:47.65443Z","steps":["trace[46878725] 'read index received'  (duration: 361.770592ms)","trace[46878725] 'applied index is now lower than readState.Index'  (duration: 283.682µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T18:17:47.654539Z","caller":"traceutil/trace.go:171","msg":"trace[1664972505] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"401.863233ms","start":"2024-07-19T18:17:47.252668Z","end":"2024-07-19T18:17:47.654532Z","steps":["trace[1664972505] 'process raft request'  (duration: 401.516738ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.654716Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.252656Z","time spent":"401.901611ms","remote":"127.0.0.1:47740","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1265 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-19T18:17:47.654861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.496699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T18:17:47.654898Z","caller":"traceutil/trace.go:171","msg":"trace[770314431] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1311; }","duration":"362.599042ms","start":"2024-07-19T18:17:47.292291Z","end":"2024-07-19T18:17:47.65489Z","steps":["trace[770314431] 'agreement among raft nodes before linearized reading'  (duration: 362.507874ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.654919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.292276Z","time spent":"362.636437ms","remote":"127.0.0.1:47672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-19T18:17:47.655071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.449591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3966"}
	{"level":"info","ts":"2024-07-19T18:17:47.655102Z","caller":"traceutil/trace.go:171","msg":"trace[683584421] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1311; }","duration":"351.499102ms","start":"2024-07-19T18:17:47.303597Z","end":"2024-07-19T18:17:47.655096Z","steps":["trace[683584421] 'agreement among raft nodes before linearized reading'  (duration: 351.427148ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:17:47.655128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:17:47.303583Z","time spent":"351.534179ms","remote":"127.0.0.1:47686","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3989,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"info","ts":"2024-07-19T18:18:26.206107Z","caller":"traceutil/trace.go:171","msg":"trace[1475274680] linearizableReadLoop","detail":"{readStateIndex:1587; appliedIndex:1586; }","duration":"334.91608ms","start":"2024-07-19T18:18:25.871155Z","end":"2024-07-19T18:18:26.206071Z","steps":["trace[1475274680] 'read index received'  (duration: 334.711855ms)","trace[1475274680] 'applied index is now lower than readState.Index'  (duration: 203.431µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T18:18:26.206283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.106455ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T18:18:26.20628Z","caller":"traceutil/trace.go:171","msg":"trace[1277275665] transaction","detail":"{read_only:false; response_revision:1534; number_of_response:1; }","duration":"385.997061ms","start":"2024-07-19T18:18:25.820263Z","end":"2024-07-19T18:18:26.20626Z","steps":["trace[1277275665] 'process raft request'  (duration: 385.621381ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:18:26.20631Z","caller":"traceutil/trace.go:171","msg":"trace[4348081] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1534; }","duration":"335.175699ms","start":"2024-07-19T18:18:25.871126Z","end":"2024-07-19T18:18:26.206302Z","steps":["trace[4348081] 'agreement among raft nodes before linearized reading'  (duration: 335.071357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:18:26.206371Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:18:25.871107Z","time spent":"335.257667ms","remote":"127.0.0.1:47672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-19T18:18:26.206428Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T18:18:25.820251Z","time spent":"386.062518ms","remote":"127.0.0.1:47740","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1526 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-19T18:18:58.598037Z","caller":"traceutil/trace.go:171","msg":"trace[1931559067] linearizableReadLoop","detail":"{readStateIndex:1787; appliedIndex:1786; }","duration":"145.883721ms","start":"2024-07-19T18:18:58.452125Z","end":"2024-07-19T18:18:58.598009Z","steps":["trace[1931559067] 'read index received'  (duration: 145.692277ms)","trace[1931559067] 'applied index is now lower than readState.Index'  (duration: 190.494µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T18:18:58.598264Z","caller":"traceutil/trace.go:171","msg":"trace[918694199] transaction","detail":"{read_only:false; response_revision:1722; number_of_response:1; }","duration":"234.747826ms","start":"2024-07-19T18:18:58.363501Z","end":"2024-07-19T18:18:58.598249Z","steps":["trace[918694199] 'process raft request'  (duration: 234.371754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:18:58.598435Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.161607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-snapshotter-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T18:18:58.598494Z","caller":"traceutil/trace.go:171","msg":"trace[660000056] range","detail":"{range_begin:/registry/clusterroles/external-snapshotter-runner; range_end:; response_count:0; response_revision:1722; }","duration":"146.381682ms","start":"2024-07-19T18:18:58.4521Z","end":"2024-07-19T18:18:58.598482Z","steps":["trace[660000056] 'agreement among raft nodes before linearized reading'  (duration: 146.157251ms)"],"step_count":1}
	
	
	==> gcp-auth [aea9cff6ef549c9e1c95d95faa0a5d4f6dc873251aa2962301418ecc8e1bd19c] <==
	2024/07/19 18:17:29 GCP Auth Webhook started!
	2024/07/19 18:17:38 Ready to marshal response ...
	2024/07/19 18:17:38 Ready to write response ...
	2024/07/19 18:17:38 Ready to marshal response ...
	2024/07/19 18:17:38 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:40 Ready to marshal response ...
	2024/07/19 18:17:40 Ready to write response ...
	2024/07/19 18:17:43 Ready to marshal response ...
	2024/07/19 18:17:43 Ready to write response ...
	2024/07/19 18:17:55 Ready to marshal response ...
	2024/07/19 18:17:55 Ready to write response ...
	2024/07/19 18:17:56 Ready to marshal response ...
	2024/07/19 18:17:56 Ready to write response ...
	2024/07/19 18:18:02 Ready to marshal response ...
	2024/07/19 18:18:02 Ready to write response ...
	2024/07/19 18:18:20 Ready to marshal response ...
	2024/07/19 18:18:20 Ready to write response ...
	2024/07/19 18:18:45 Ready to marshal response ...
	2024/07/19 18:18:45 Ready to write response ...
	2024/07/19 18:20:25 Ready to marshal response ...
	2024/07/19 18:20:25 Ready to write response ...
	
	
	==> kernel <==
	 18:23:06 up 7 min,  0 users,  load average: 0.13, 0.66, 0.46
	Linux addons-990895 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fa9964845eb57345cbac255a2cedb1452583f0a039141166cd96f55ac0bc399d] <==
	E0719 18:17:49.611616       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 18:17:49.611942       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.617452       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.638144       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	E0719 18:17:49.679826       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.63.79:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.63.79:443: connect: connection refused
	I0719 18:17:49.804983       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 18:18:01.915210       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0719 18:18:02.109760       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.44.136"}
	E0719 18:18:12.273573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 18:18:32.636156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 18:19:01.024666       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.027742       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.053155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.053194       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.061914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.061969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.077006       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.077058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 18:19:01.096677       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 18:19:01.096722       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0719 18:19:02.062499       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 18:19:02.096986       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 18:19:02.107524       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0719 18:20:25.530461       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.38.15"}
	
	
	==> kube-controller-manager [fea3fc99757566ccb65c857a769b3271978f53e309dd719d714b6cbeca07cd64] <==
	W0719 18:20:50.343922       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:20:50.344017       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:03.870017       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:03.870096       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:23.236931       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:23.236968       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:28.120204       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:28.120377       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:34.561979       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:34.562039       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:50.827717       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:50.827771       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:21:57.558160       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:21:57.558215       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:22:00.042085       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:22:00.042191       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:22:34.351041       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:22:34.351221       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:22:34.658885       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:22:34.659021       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:22:35.856784       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:22:35.856932       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 18:22:39.728744       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 18:22:39.728793       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 18:23:05.394801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.622µs"
	
	
	==> kube-proxy [c7d96760d467535606e3b0d4eadf8bea6d614327b475d5a969e3cb9f4c1348f4] <==
	I0719 18:16:08.959991       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:16:09.169414       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.239"]
	I0719 18:16:09.975251       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:16:09.975313       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:16:09.986136       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:16:10.037635       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:16:10.037868       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:16:10.037882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:16:10.068283       1 config.go:192] "Starting service config controller"
	I0719 18:16:10.068300       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:16:10.068394       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:16:10.068399       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:16:10.069051       1 config.go:319] "Starting node config controller"
	I0719 18:16:10.069060       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:16:10.170978       1 shared_informer.go:320] Caches are synced for node config
	I0719 18:16:10.171019       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:16:10.171049       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8c4e6fd25057f34a10ab26df93092ca32cd7198404478a6e199b7f5b5f55cd95] <==
	W0719 18:15:47.912471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 18:15:47.912480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 18:15:47.914646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:15:47.914684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:15:47.914726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 18:15:47.914750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 18:15:47.917769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:15:47.917811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:15:47.918042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:15:47.923061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:15:47.918114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:15:47.923119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:15:47.918172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 18:15:47.923169       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 18:15:47.918208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 18:15:47.923197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 18:15:48.787680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:15:48.787808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:15:48.915781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:15:48.915825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:15:48.944943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:15:48.944988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:15:49.167608       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:15:49.167652       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 18:15:52.201529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 18:20:31 addons-990895 kubelet[1270]: I0719 18:20:31.016298    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558"} err="failed to get container status \"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558\": rpc error: code = NotFound desc = could not find container \"685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558\": container with ID starting with 685d5d12d9f38938443fdb4dcc3696c39a0ad35ad8383ae6734bbf35ece92558 not found: ID does not exist"
	Jul 19 18:20:32 addons-990895 kubelet[1270]: I0719 18:20:32.221854    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46cb4a92-9a12-4ef8-8f9e-81f71fb2409b" path="/var/lib/kubelet/pods/46cb4a92-9a12-4ef8-8f9e-81f71fb2409b/volumes"
	Jul 19 18:20:50 addons-990895 kubelet[1270]: E0719 18:20:50.230066    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:20:50 addons-990895 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:20:50 addons-990895 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:20:50 addons-990895 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:20:50 addons-990895 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:20:50 addons-990895 kubelet[1270]: I0719 18:20:50.747705    1270 scope.go:117] "RemoveContainer" containerID="8b13a25a266eb840848bcc2e3772733071b8d1af378e41742b75074b0905a4fd"
	Jul 19 18:20:50 addons-990895 kubelet[1270]: I0719 18:20:50.764711    1270 scope.go:117] "RemoveContainer" containerID="28a1189b89e09e8da4575632b354554ad147075ab68dadf2da971ff80afae288"
	Jul 19 18:21:50 addons-990895 kubelet[1270]: E0719 18:21:50.231202    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:21:50 addons-990895 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:21:50 addons-990895 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:21:50 addons-990895 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:21:50 addons-990895 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:22:50 addons-990895 kubelet[1270]: E0719 18:22:50.229845    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:22:50 addons-990895 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:22:50 addons-990895 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:22:50 addons-990895 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:22:50 addons-990895 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.889817    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-tmp-dir\") pod \"15b48dfb-deb0-43b5-9a04-9d6896e8c5da\" (UID: \"15b48dfb-deb0-43b5-9a04-9d6896e8c5da\") "
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.889980    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j69j\" (UniqueName: \"kubernetes.io/projected/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-kube-api-access-7j69j\") pod \"15b48dfb-deb0-43b5-9a04-9d6896e8c5da\" (UID: \"15b48dfb-deb0-43b5-9a04-9d6896e8c5da\") "
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.890277    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "15b48dfb-deb0-43b5-9a04-9d6896e8c5da" (UID: "15b48dfb-deb0-43b5-9a04-9d6896e8c5da"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.900636    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-kube-api-access-7j69j" (OuterVolumeSpecName: "kube-api-access-7j69j") pod "15b48dfb-deb0-43b5-9a04-9d6896e8c5da" (UID: "15b48dfb-deb0-43b5-9a04-9d6896e8c5da"). InnerVolumeSpecName "kube-api-access-7j69j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.990637    1270 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-tmp-dir\") on node \"addons-990895\" DevicePath \"\""
	Jul 19 18:23:06 addons-990895 kubelet[1270]: I0719 18:23:06.990677    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7j69j\" (UniqueName: \"kubernetes.io/projected/15b48dfb-deb0-43b5-9a04-9d6896e8c5da-kube-api-access-7j69j\") on node \"addons-990895\" DevicePath \"\""
	
	
	==> storage-provisioner [20f6f7da0a8f0aedaa398a9f1c9576af2b52c98755486768b6f7b7fa91023439] <==
	I0719 18:16:11.148205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 18:16:11.279460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 18:16:11.279576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 18:16:11.369147       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 18:16:11.389561       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe!
	I0719 18:16:11.371130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21af0ea0-5ada-4f86-9f13-08f4782e4c03", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe became leader
	I0719 18:16:11.490436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-990895_636eaf15-a65e-4f5f-a735-33ad948d0fbe!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-990895 -n addons-990895
helpers_test.go:261: (dbg) Run:  kubectl --context addons-990895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-mg572
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-990895 describe pod metrics-server-c59844bb4-mg572
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-990895 describe pod metrics-server-c59844bb4-mg572: exit status 1 (61.693691ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-mg572" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-990895 describe pod metrics-server-c59844bb4-mg572: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (315.01s)
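For context, the kube-apiserver log captured above shows the v1beta1.metrics.k8s.io APIService repeatedly failing with "connection refused" against 10.99.63.79:443. A minimal manual check of that state, assuming the standard label and deployment name used by the minikube metrics-server addon, could look like:

	# Is the APIService that metrics-server registers marked Available?
	kubectl --context addons-990895 get apiservice v1beta1.metrics.k8s.io
	# Is the backing pod running, and what does it log? (label and deployment name are assumptions)
	kubectl --context addons-990895 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-990895 -n kube-system logs deploy/metrics-server
	# End-to-end check: this only succeeds once the APIService is Available
	kubectl --context addons-990895 top nodes

If the APIService never reports Available=True, the connection-refused errors in the apiserver log would usually point at the metrics-server pod or its Service endpoint rather than at the test harness.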

                                                
                                    
TestAddons/StoppedEnableDisable (154.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-990895
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-990895: exit status 82 (2m0.452924487s)

                                                
                                                
-- stdout --
	* Stopping node "addons-990895"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-990895" : exit status 82
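A hedged sketch of how the stuck node could be inspected by hand on the KVM driver; the libvirt domain name is assumed to match the profile name, which is typical for minikube's kvm2 driver but not confirmed by this log:

	# What does minikube itself think the node state is?
	out/minikube-linux-amd64 status -p addons-990895
	# Ask libvirt directly (domain name assumed to be the profile name)
	sudo virsh list --all
	sudo virsh dominfo addons-990895
	# Retry the stop with verbose logging to see where it times out
	out/minikube-linux-amd64 stop -p addons-990895 --alsologtostderr -v=7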
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-990895
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-990895: exit status 11 (21.464606375s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-990895" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-990895
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-990895: exit status 11 (6.144220137s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-990895" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-990895
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-990895: exit status 11 (6.143060966s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-990895" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.21s)
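All three addon commands fail at the same pre-check: before enabling or disabling anything, the CLI verifies the node is not paused by listing containers over SSH ("check paused: list paused: crictl list"), and SSH to 192.168.39.239:22 returns "no route to host" even though the driver still reports the VM as Running. A rough manual check of that same path, assuming the profile is still registered, might look like:

$ out/minikube-linux-amd64 -p addons-990895 status                     # driver- and kubelet-level view of the node
$ out/minikube-linux-amd64 -p addons-990895 ip                         # should print 192.168.39.239 per the log above
$ out/minikube-linux-amd64 -p addons-990895 ssh "sudo crictl ps"       # the container listing the paused check relies on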

TestMultiControlPlane/serial/StopSecondaryNode (141.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 node stop m02 -v=7 --alsologtostderr
E0719 18:35:24.486023   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:36:05.446483   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466035779s)

-- stdout --
	* Stopping node "ha-269108-m02"  ...

-- /stdout --
** stderr ** 
	I0719 18:35:14.770792   37164 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:35:14.770944   37164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:35:14.770955   37164 out.go:304] Setting ErrFile to fd 2...
	I0719 18:35:14.770962   37164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:35:14.771148   37164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:35:14.771449   37164 mustload.go:65] Loading cluster: ha-269108
	I0719 18:35:14.771831   37164 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:35:14.771875   37164 stop.go:39] StopHost: ha-269108-m02
	I0719 18:35:14.772246   37164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:35:14.772309   37164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:35:14.788264   37164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0719 18:35:14.788673   37164 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:35:14.789217   37164 main.go:141] libmachine: Using API Version  1
	I0719 18:35:14.789242   37164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:35:14.789654   37164 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:35:14.792077   37164 out.go:177] * Stopping node "ha-269108-m02"  ...
	I0719 18:35:14.793415   37164 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 18:35:14.793442   37164 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:35:14.793677   37164 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 18:35:14.793712   37164 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:35:14.796665   37164 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:35:14.797083   37164 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:35:14.797116   37164 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:35:14.797261   37164 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:35:14.797432   37164 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:35:14.797586   37164 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:35:14.797742   37164 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:35:14.886379   37164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 18:35:14.939078   37164 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 18:35:14.992826   37164 main.go:141] libmachine: Stopping "ha-269108-m02"...
	I0719 18:35:14.992850   37164 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:35:14.994596   37164 main.go:141] libmachine: (ha-269108-m02) Calling .Stop
	I0719 18:35:14.998102   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 0/120
	I0719 18:35:15.999397   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 1/120
	I0719 18:35:17.000914   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 2/120
	I0719 18:35:18.003148   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 3/120
	I0719 18:35:19.004402   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 4/120
	I0719 18:35:20.006101   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 5/120
	I0719 18:35:21.007852   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 6/120
	I0719 18:35:22.009259   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 7/120
	I0719 18:35:23.010845   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 8/120
	I0719 18:35:24.012301   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 9/120
	I0719 18:35:25.014215   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 10/120
	I0719 18:35:26.015881   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 11/120
	I0719 18:35:27.017212   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 12/120
	I0719 18:35:28.018575   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 13/120
	I0719 18:35:29.020028   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 14/120
	I0719 18:35:30.021779   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 15/120
	I0719 18:35:31.023403   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 16/120
	I0719 18:35:32.025135   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 17/120
	I0719 18:35:33.026435   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 18/120
	I0719 18:35:34.028092   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 19/120
	I0719 18:35:35.029635   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 20/120
	I0719 18:35:36.031163   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 21/120
	I0719 18:35:37.032270   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 22/120
	I0719 18:35:38.034425   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 23/120
	I0719 18:35:39.036272   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 24/120
	I0719 18:35:40.038039   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 25/120
	I0719 18:35:41.039190   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 26/120
	I0719 18:35:42.040772   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 27/120
	I0719 18:35:43.042759   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 28/120
	I0719 18:35:44.044199   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 29/120
	I0719 18:35:45.046719   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 30/120
	I0719 18:35:46.048088   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 31/120
	I0719 18:35:47.050415   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 32/120
	I0719 18:35:48.051907   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 33/120
	I0719 18:35:49.053529   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 34/120
	I0719 18:35:50.055124   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 35/120
	I0719 18:35:51.056406   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 36/120
	I0719 18:35:52.058364   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 37/120
	I0719 18:35:53.059500   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 38/120
	I0719 18:35:54.060825   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 39/120
	I0719 18:35:55.062730   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 40/120
	I0719 18:35:56.064269   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 41/120
	I0719 18:35:57.066060   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 42/120
	I0719 18:35:58.067728   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 43/120
	I0719 18:35:59.069344   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 44/120
	I0719 18:36:00.071101   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 45/120
	I0719 18:36:01.072976   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 46/120
	I0719 18:36:02.074358   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 47/120
	I0719 18:36:03.076420   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 48/120
	I0719 18:36:04.078896   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 49/120
	I0719 18:36:05.080891   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 50/120
	I0719 18:36:06.082127   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 51/120
	I0719 18:36:07.083385   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 52/120
	I0719 18:36:08.084883   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 53/120
	I0719 18:36:09.086330   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 54/120
	I0719 18:36:10.088154   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 55/120
	I0719 18:36:11.090269   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 56/120
	I0719 18:36:12.091656   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 57/120
	I0719 18:36:13.093263   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 58/120
	I0719 18:36:14.094650   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 59/120
	I0719 18:36:15.096743   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 60/120
	I0719 18:36:16.098920   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 61/120
	I0719 18:36:17.100693   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 62/120
	I0719 18:36:18.103119   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 63/120
	I0719 18:36:19.105265   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 64/120
	I0719 18:36:20.107051   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 65/120
	I0719 18:36:21.108358   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 66/120
	I0719 18:36:22.109766   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 67/120
	I0719 18:36:23.111488   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 68/120
	I0719 18:36:24.112743   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 69/120
	I0719 18:36:25.114617   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 70/120
	I0719 18:36:26.116295   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 71/120
	I0719 18:36:27.118703   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 72/120
	I0719 18:36:28.120298   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 73/120
	I0719 18:36:29.122725   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 74/120
	I0719 18:36:30.123993   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 75/120
	I0719 18:36:31.126260   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 76/120
	I0719 18:36:32.127703   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 77/120
	I0719 18:36:33.129062   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 78/120
	I0719 18:36:34.130367   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 79/120
	I0719 18:36:35.132808   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 80/120
	I0719 18:36:36.133978   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 81/120
	I0719 18:36:37.135262   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 82/120
	I0719 18:36:38.136519   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 83/120
	I0719 18:36:39.137910   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 84/120
	I0719 18:36:40.139817   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 85/120
	I0719 18:36:41.141547   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 86/120
	I0719 18:36:42.142663   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 87/120
	I0719 18:36:43.144698   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 88/120
	I0719 18:36:44.145914   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 89/120
	I0719 18:36:45.147811   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 90/120
	I0719 18:36:46.149070   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 91/120
	I0719 18:36:47.150231   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 92/120
	I0719 18:36:48.151488   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 93/120
	I0719 18:36:49.153352   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 94/120
	I0719 18:36:50.155134   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 95/120
	I0719 18:36:51.156653   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 96/120
	I0719 18:36:52.158503   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 97/120
	I0719 18:36:53.159924   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 98/120
	I0719 18:36:54.161071   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 99/120
	I0719 18:36:55.163257   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 100/120
	I0719 18:36:56.165052   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 101/120
	I0719 18:36:57.166450   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 102/120
	I0719 18:36:58.167878   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 103/120
	I0719 18:36:59.169494   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 104/120
	I0719 18:37:00.171301   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 105/120
	I0719 18:37:01.172705   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 106/120
	I0719 18:37:02.174095   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 107/120
	I0719 18:37:03.175452   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 108/120
	I0719 18:37:04.176924   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 109/120
	I0719 18:37:05.179271   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 110/120
	I0719 18:37:06.180633   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 111/120
	I0719 18:37:07.182135   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 112/120
	I0719 18:37:08.183556   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 113/120
	I0719 18:37:09.185833   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 114/120
	I0719 18:37:10.187755   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 115/120
	I0719 18:37:11.189109   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 116/120
	I0719 18:37:12.190656   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 117/120
	I0719 18:37:13.191970   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 118/120
	I0719 18:37:14.194292   37164 main.go:141] libmachine: (ha-269108-m02) Waiting for machine to stop 119/120
	I0719 18:37:15.195309   37164 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 18:37:15.195513   37164 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-269108 node stop m02 -v=7 --alsologtostderr": exit status 30
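In the trace above the /etc/cni and /etc/kubernetes backup completes, then the driver polls the domain once per second for 120 attempts ("Waiting for machine to stop N/120") and gives up while libvirt still reports it as running. A hedged way to inspect and force the same shutdown outside minikube, assuming the kvm2 driver named the libvirt domain after the machine (ha-269108-m02) on the system connection:

$ virsh -c qemu:///system list --all                   # confirm the domain exists and is still running
$ virsh -c qemu:///system shutdown ha-269108-m02       # ask the guest to power off via ACPI
$ virsh -c qemu:///system destroy ha-269108-m02        # hard power-off if the guest ignores the ACPI request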
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
E0719 18:37:27.367981   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:37:32.426117   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (19.113753623s)

-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0719 18:37:15.238088   37621 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:37:15.238617   37621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:15.238635   37621 out.go:304] Setting ErrFile to fd 2...
	I0719 18:37:15.238642   37621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:15.239122   37621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:37:15.239599   37621 out.go:298] Setting JSON to false
	I0719 18:37:15.239638   37621 mustload.go:65] Loading cluster: ha-269108
	I0719 18:37:15.239727   37621 notify.go:220] Checking for updates...
	I0719 18:37:15.240103   37621 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:37:15.240123   37621 status.go:255] checking status of ha-269108 ...
	I0719 18:37:15.240564   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.240613   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.256352   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0719 18:37:15.256753   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.257458   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.257485   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.257800   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.258005   37621 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:37:15.259553   37621 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:37:15.259568   37621 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:15.259858   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.259892   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.275325   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0719 18:37:15.275736   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.276187   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.276229   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.276726   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.276927   37621 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:37:15.279940   37621 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:15.280357   37621 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:15.280387   37621 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:15.280553   37621 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:15.280870   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.280914   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.294985   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0719 18:37:15.295365   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.295827   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.295866   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.296143   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.296338   37621 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:37:15.296516   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:15.296562   37621 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:37:15.299259   37621 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:15.299693   37621 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:15.299719   37621 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:15.299881   37621 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:37:15.300087   37621 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:37:15.300227   37621 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:37:15.300353   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:37:15.381628   37621 ssh_runner.go:195] Run: systemctl --version
	I0719 18:37:15.388950   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:15.405937   37621 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:15.405968   37621 api_server.go:166] Checking apiserver status ...
	I0719 18:37:15.406006   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:15.421058   37621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:37:15.432652   37621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:15.432709   37621 ssh_runner.go:195] Run: ls
	I0719 18:37:15.437455   37621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:15.441759   37621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:15.441779   37621 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:37:15.441787   37621 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:15.441803   37621 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:37:15.442095   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.442127   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.457118   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0719 18:37:15.457551   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.458054   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.458067   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.458399   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.458598   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:37:15.460269   37621 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:37:15.460286   37621 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:15.460556   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.460600   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.475411   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0719 18:37:15.475793   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.476255   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.476276   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.476576   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.476746   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:37:15.479521   37621 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:15.480019   37621 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:15.480048   37621 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:15.480237   37621 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:15.480621   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:15.480663   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:15.495197   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0719 18:37:15.495586   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:15.496101   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:15.496122   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:15.496470   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:15.496641   37621 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:37:15.496802   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:15.496817   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:37:15.499382   37621 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:15.499770   37621 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:15.499797   37621 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:15.499922   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:37:15.500087   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:37:15.500230   37621 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:37:15.500365   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:37:33.952035   37621 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:33.952172   37621 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:37:33.952195   37621 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:33.952210   37621 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:37:33.952241   37621 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:33.952260   37621 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:37:33.952585   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:33.952633   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:33.968819   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45089
	I0719 18:37:33.969225   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:33.969687   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:33.969709   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:33.969986   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:33.970213   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:37:33.971615   37621 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:37:33.971632   37621 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:33.972012   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:33.972046   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:33.986109   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0719 18:37:33.986520   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:33.986986   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:33.987015   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:33.987389   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:33.987575   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:37:33.990255   37621 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:33.990615   37621 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:33.990644   37621 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:33.990737   37621 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:33.991143   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:33.991188   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:34.004912   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0719 18:37:34.005312   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:34.005740   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:34.005754   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:34.006094   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:34.006392   37621 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:37:34.006586   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:34.006602   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:37:34.009547   37621 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:34.010050   37621 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:34.010082   37621 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:34.010231   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:37:34.010390   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:37:34.010558   37621 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:37:34.010710   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:37:34.096112   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:34.114661   37621 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:34.114689   37621 api_server.go:166] Checking apiserver status ...
	I0719 18:37:34.114724   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:34.130532   37621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:37:34.139926   37621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:34.139969   37621 ssh_runner.go:195] Run: ls
	I0719 18:37:34.144038   37621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:34.148216   37621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:34.148236   37621 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:37:34.148244   37621 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:34.148257   37621 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:37:34.148534   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:34.148572   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:34.162858   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0719 18:37:34.163250   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:34.163678   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:34.163698   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:34.164000   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:34.164185   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:37:34.165819   37621 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:37:34.165837   37621 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:34.166122   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:34.166159   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:34.180630   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41991
	I0719 18:37:34.181006   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:34.181453   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:34.181472   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:34.181775   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:34.181990   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:37:34.184783   37621 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:34.185285   37621 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:34.185309   37621 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:34.185429   37621 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:34.185700   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:34.185734   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:34.200106   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0719 18:37:34.200594   37621 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:34.201094   37621 main.go:141] libmachine: Using API Version  1
	I0719 18:37:34.201119   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:34.201432   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:34.201631   37621 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:37:34.201817   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:34.201837   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:37:34.204692   37621 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:34.205087   37621 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:34.205117   37621 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:34.205244   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:37:34.205438   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:37:34.205590   37621 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:37:34.205793   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:37:34.291932   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:34.308537   37621 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr" : exit status 3
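Per the stderr trace, status SSHes into each node to probe storage (df -h /var) and the kubelet service, then hits the shared apiserver endpoint at https://192.168.39.254:8443/healthz for control-plane nodes; m02 degrades to Error/Nonexistent as soon as its SSH dial returns "no route to host". Roughly the same checks can be run by hand, assuming the surviving nodes are reachable and that the ssh --node flag accepts the short node name:

$ out/minikube-linux-amd64 -p ha-269108 ssh -n m03 "df -h /var"                          # per-node storage probe status performs
$ out/minikube-linux-amd64 -p ha-269108 ssh -n m03 "sudo systemctl is-active kubelet"    # kubelet liveness check
$ curl -k https://192.168.39.254:8443/healthz                                            # apiserver health probe (anonymous /healthz access is the default)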
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-269108 -n ha-269108
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-269108 logs -n 25: (1.310033382s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m03_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m04 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp testdata/cp-test.txt                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m04_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03:/home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m03 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-269108 node stop m02 -v=7                                                    | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:30:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:30:31.420743   33150 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:30:31.420877   33150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:30:31.420887   33150 out.go:304] Setting ErrFile to fd 2...
	I0719 18:30:31.420893   33150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:30:31.421086   33150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:30:31.421655   33150 out.go:298] Setting JSON to false
	I0719 18:30:31.422488   33150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4374,"bootTime":1721409457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:30:31.422545   33150 start.go:139] virtualization: kvm guest
	I0719 18:30:31.424466   33150 out.go:177] * [ha-269108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:30:31.425625   33150 notify.go:220] Checking for updates...
	I0719 18:30:31.425654   33150 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:30:31.426849   33150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:30:31.427995   33150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:30:31.429188   33150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.430409   33150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:30:31.431696   33150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:30:31.433097   33150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:30:31.466582   33150 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 18:30:31.467578   33150 start.go:297] selected driver: kvm2
	I0719 18:30:31.467592   33150 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:30:31.467605   33150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:30:31.468277   33150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:30:31.468355   33150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:30:31.483111   33150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:30:31.483160   33150 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:30:31.483364   33150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:30:31.483428   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:30:31.483440   33150 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 18:30:31.483445   33150 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 18:30:31.483529   33150 start.go:340] cluster config:
	{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0719 18:30:31.483624   33150 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:30:31.485254   33150 out.go:177] * Starting "ha-269108" primary control-plane node in "ha-269108" cluster
	I0719 18:30:31.486231   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:30:31.486260   33150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:30:31.486268   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:30:31.486353   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:30:31.486365   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
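
A note on the preload step above: the tarball of preloaded images is looked up in the local cache first, and the download is skipped on a hit. A minimal sketch of that check in Go, assuming a hypothetical downloadPreload helper for the cache-miss path (not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// downloadPreload stands in for the real download step; this sketch only
	// reports that a download would be needed.
	func downloadPreload(path string) error {
		return fmt.Errorf("preload %s not cached; a real implementation would fetch it here", path)
	}

	// ensurePreload returns the cached tarball path, downloading only on a cache miss.
	func ensurePreload(cacheDir, k8sVersion, runtime string) (string, error) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		path := filepath.Join(cacheDir, "preloaded-tarball", name)
		if _, err := os.Stat(path); err == nil {
			return path, nil // already cached, skip download (as in the log above)
		}
		if err := downloadPreload(path); err != nil {
			return "", err
		}
		return path, nil
	}

	func main() {
		p, err := ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.30.3", "cri-o")
		fmt.Println(p, err)
	}
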
	I0719 18:30:31.486701   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:30:31.486730   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json: {Name:mk289cdb38a1e07701b9aa5db837f263a01da0ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:30:31.486879   33150 start.go:360] acquireMachinesLock for ha-269108: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:30:31.486913   33150 start.go:364] duration metric: took 18.469µs to acquireMachinesLock for "ha-269108"
	I0719 18:30:31.486935   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:30:31.486990   33150 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 18:30:31.488436   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:30:31.488586   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:30:31.488625   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:30:31.502690   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0719 18:30:31.503132   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:30:31.503625   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:30:31.503649   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:30:31.503997   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:30:31.504137   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:31.504279   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:31.504420   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:30:31.504442   33150 client.go:168] LocalClient.Create starting
	I0719 18:30:31.504471   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:30:31.504500   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:30:31.504512   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:30:31.504558   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:30:31.504575   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:30:31.504586   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:30:31.504600   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:30:31.504608   33150 main.go:141] libmachine: (ha-269108) Calling .PreCreateCheck
	I0719 18:30:31.504898   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:31.505239   33150 main.go:141] libmachine: Creating machine...
	I0719 18:30:31.505250   33150 main.go:141] libmachine: (ha-269108) Calling .Create
	I0719 18:30:31.505357   33150 main.go:141] libmachine: (ha-269108) Creating KVM machine...
	I0719 18:30:31.506439   33150 main.go:141] libmachine: (ha-269108) DBG | found existing default KVM network
	I0719 18:30:31.507078   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.506961   33173 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0719 18:30:31.507122   33150 main.go:141] libmachine: (ha-269108) DBG | created network xml: 
	I0719 18:30:31.507137   33150 main.go:141] libmachine: (ha-269108) DBG | <network>
	I0719 18:30:31.507146   33150 main.go:141] libmachine: (ha-269108) DBG |   <name>mk-ha-269108</name>
	I0719 18:30:31.507169   33150 main.go:141] libmachine: (ha-269108) DBG |   <dns enable='no'/>
	I0719 18:30:31.507181   33150 main.go:141] libmachine: (ha-269108) DBG |   
	I0719 18:30:31.507189   33150 main.go:141] libmachine: (ha-269108) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 18:30:31.507199   33150 main.go:141] libmachine: (ha-269108) DBG |     <dhcp>
	I0719 18:30:31.507211   33150 main.go:141] libmachine: (ha-269108) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 18:30:31.507222   33150 main.go:141] libmachine: (ha-269108) DBG |     </dhcp>
	I0719 18:30:31.507241   33150 main.go:141] libmachine: (ha-269108) DBG |   </ip>
	I0719 18:30:31.507250   33150 main.go:141] libmachine: (ha-269108) DBG |   
	I0719 18:30:31.507259   33150 main.go:141] libmachine: (ha-269108) DBG | </network>
	I0719 18:30:31.507271   33150 main.go:141] libmachine: (ha-269108) DBG | 
	I0719 18:30:31.511990   33150 main.go:141] libmachine: (ha-269108) DBG | trying to create private KVM network mk-ha-269108 192.168.39.0/24...
	I0719 18:30:31.573689   33150 main.go:141] libmachine: (ha-269108) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 ...
	I0719 18:30:31.573720   33150 main.go:141] libmachine: (ha-269108) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:30:31.573733   33150 main.go:141] libmachine: (ha-269108) DBG | private KVM network mk-ha-269108 192.168.39.0/24 created
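
The network.go:206 line above shows the driver picking a free private /24 (192.168.39.0/24) before defining the mk-ha-269108 network from the XML it printed. A rough sketch of one way to choose such a subnet, assuming a simple check against local interface addresses (not the kvm2 driver's exact algorithm):

	package main

	import (
		"fmt"
		"net"
	)

	// freePrivateSubnet returns the first candidate /24 that does not overlap any
	// address already assigned to a local interface. A simplified stand-in for
	// minikube's subnet selection.
	func freePrivateSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, c := range candidates {
			_, cidr, err := net.ParseCIDR(c)
			if err != nil {
				return "", err
			}
			inUse := false
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
					inUse = true
					break
				}
			}
			if !inUse {
				return c, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := freePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
		fmt.Println(subnet, err)
	}
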
	I0719 18:30:31.573782   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.573516   33173 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.573811   33150 main.go:141] libmachine: (ha-269108) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:30:31.802036   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.801927   33173 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa...
	I0719 18:30:31.964243   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.964084   33173 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/ha-269108.rawdisk...
	I0719 18:30:31.964271   33150 main.go:141] libmachine: (ha-269108) DBG | Writing magic tar header
	I0719 18:30:31.964362   33150 main.go:141] libmachine: (ha-269108) DBG | Writing SSH key tar header
	I0719 18:30:31.964404   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 (perms=drwx------)
	I0719 18:30:31.964421   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.964199   33173 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 ...
	I0719 18:30:31.964430   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:30:31.964444   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:30:31.964450   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:30:31.964457   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108
	I0719 18:30:31.964466   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:30:31.964474   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.964482   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:30:31.964489   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:30:31.964495   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:30:31.964502   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home
	I0719 18:30:31.964508   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:30:31.964521   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:30:31.964529   33150 main.go:141] libmachine: (ha-269108) Creating domain...
	I0719 18:30:31.964534   33150 main.go:141] libmachine: (ha-269108) DBG | Skipping /home - not owner
	I0719 18:30:31.965495   33150 main.go:141] libmachine: (ha-269108) define libvirt domain using xml: 
	I0719 18:30:31.965508   33150 main.go:141] libmachine: (ha-269108) <domain type='kvm'>
	I0719 18:30:31.965513   33150 main.go:141] libmachine: (ha-269108)   <name>ha-269108</name>
	I0719 18:30:31.965518   33150 main.go:141] libmachine: (ha-269108)   <memory unit='MiB'>2200</memory>
	I0719 18:30:31.965523   33150 main.go:141] libmachine: (ha-269108)   <vcpu>2</vcpu>
	I0719 18:30:31.965527   33150 main.go:141] libmachine: (ha-269108)   <features>
	I0719 18:30:31.965532   33150 main.go:141] libmachine: (ha-269108)     <acpi/>
	I0719 18:30:31.965536   33150 main.go:141] libmachine: (ha-269108)     <apic/>
	I0719 18:30:31.965541   33150 main.go:141] libmachine: (ha-269108)     <pae/>
	I0719 18:30:31.965547   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965552   33150 main.go:141] libmachine: (ha-269108)   </features>
	I0719 18:30:31.965561   33150 main.go:141] libmachine: (ha-269108)   <cpu mode='host-passthrough'>
	I0719 18:30:31.965565   33150 main.go:141] libmachine: (ha-269108)   
	I0719 18:30:31.965571   33150 main.go:141] libmachine: (ha-269108)   </cpu>
	I0719 18:30:31.965576   33150 main.go:141] libmachine: (ha-269108)   <os>
	I0719 18:30:31.965580   33150 main.go:141] libmachine: (ha-269108)     <type>hvm</type>
	I0719 18:30:31.965586   33150 main.go:141] libmachine: (ha-269108)     <boot dev='cdrom'/>
	I0719 18:30:31.965597   33150 main.go:141] libmachine: (ha-269108)     <boot dev='hd'/>
	I0719 18:30:31.965603   33150 main.go:141] libmachine: (ha-269108)     <bootmenu enable='no'/>
	I0719 18:30:31.965607   33150 main.go:141] libmachine: (ha-269108)   </os>
	I0719 18:30:31.965612   33150 main.go:141] libmachine: (ha-269108)   <devices>
	I0719 18:30:31.965619   33150 main.go:141] libmachine: (ha-269108)     <disk type='file' device='cdrom'>
	I0719 18:30:31.965626   33150 main.go:141] libmachine: (ha-269108)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/boot2docker.iso'/>
	I0719 18:30:31.965633   33150 main.go:141] libmachine: (ha-269108)       <target dev='hdc' bus='scsi'/>
	I0719 18:30:31.965639   33150 main.go:141] libmachine: (ha-269108)       <readonly/>
	I0719 18:30:31.965651   33150 main.go:141] libmachine: (ha-269108)     </disk>
	I0719 18:30:31.965663   33150 main.go:141] libmachine: (ha-269108)     <disk type='file' device='disk'>
	I0719 18:30:31.965674   33150 main.go:141] libmachine: (ha-269108)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:30:31.965687   33150 main.go:141] libmachine: (ha-269108)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/ha-269108.rawdisk'/>
	I0719 18:30:31.965694   33150 main.go:141] libmachine: (ha-269108)       <target dev='hda' bus='virtio'/>
	I0719 18:30:31.965699   33150 main.go:141] libmachine: (ha-269108)     </disk>
	I0719 18:30:31.965703   33150 main.go:141] libmachine: (ha-269108)     <interface type='network'>
	I0719 18:30:31.965711   33150 main.go:141] libmachine: (ha-269108)       <source network='mk-ha-269108'/>
	I0719 18:30:31.965716   33150 main.go:141] libmachine: (ha-269108)       <model type='virtio'/>
	I0719 18:30:31.965722   33150 main.go:141] libmachine: (ha-269108)     </interface>
	I0719 18:30:31.965728   33150 main.go:141] libmachine: (ha-269108)     <interface type='network'>
	I0719 18:30:31.965740   33150 main.go:141] libmachine: (ha-269108)       <source network='default'/>
	I0719 18:30:31.965752   33150 main.go:141] libmachine: (ha-269108)       <model type='virtio'/>
	I0719 18:30:31.965769   33150 main.go:141] libmachine: (ha-269108)     </interface>
	I0719 18:30:31.965779   33150 main.go:141] libmachine: (ha-269108)     <serial type='pty'>
	I0719 18:30:31.965784   33150 main.go:141] libmachine: (ha-269108)       <target port='0'/>
	I0719 18:30:31.965791   33150 main.go:141] libmachine: (ha-269108)     </serial>
	I0719 18:30:31.965796   33150 main.go:141] libmachine: (ha-269108)     <console type='pty'>
	I0719 18:30:31.965801   33150 main.go:141] libmachine: (ha-269108)       <target type='serial' port='0'/>
	I0719 18:30:31.965811   33150 main.go:141] libmachine: (ha-269108)     </console>
	I0719 18:30:31.965818   33150 main.go:141] libmachine: (ha-269108)     <rng model='virtio'>
	I0719 18:30:31.965825   33150 main.go:141] libmachine: (ha-269108)       <backend model='random'>/dev/random</backend>
	I0719 18:30:31.965831   33150 main.go:141] libmachine: (ha-269108)     </rng>
	I0719 18:30:31.965836   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965842   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965846   33150 main.go:141] libmachine: (ha-269108)   </devices>
	I0719 18:30:31.965856   33150 main.go:141] libmachine: (ha-269108) </domain>
	I0719 18:30:31.965862   33150 main.go:141] libmachine: (ha-269108) 
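
The domain definition above is rendered from a template by the kvm2 driver. A structural sketch of a minimal subset of that XML built with encoding/xml (illustrative only; the real template also carries the disks, interfaces, serial console and RNG device shown above):

	package main

	import (
		"encoding/xml"
		"fmt"
	)

	// Domain models only the name, memory and vcpu elements of the libvirt
	// definition printed above.
	type Domain struct {
		XMLName xml.Name `xml:"domain"`
		Type    string   `xml:"type,attr"`
		Name    string   `xml:"name"`
		Memory  Memory   `xml:"memory"`
		VCPU    int      `xml:"vcpu"`
	}

	type Memory struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	}

	func main() {
		d := Domain{
			Type:   "kvm",
			Name:   "ha-269108",
			Memory: Memory{Unit: "MiB", Value: "2200"},
			VCPU:   2,
		}
		out, err := xml.MarshalIndent(d, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
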
	I0719 18:30:31.969928   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:ae:31:4c in network default
	I0719 18:30:31.970473   33150 main.go:141] libmachine: (ha-269108) Ensuring networks are active...
	I0719 18:30:31.970490   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:31.971060   33150 main.go:141] libmachine: (ha-269108) Ensuring network default is active
	I0719 18:30:31.971367   33150 main.go:141] libmachine: (ha-269108) Ensuring network mk-ha-269108 is active
	I0719 18:30:31.971779   33150 main.go:141] libmachine: (ha-269108) Getting domain xml...
	I0719 18:30:31.972427   33150 main.go:141] libmachine: (ha-269108) Creating domain...
	I0719 18:30:33.130639   33150 main.go:141] libmachine: (ha-269108) Waiting to get IP...
	I0719 18:30:33.131510   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.131926   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.131947   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.131911   33173 retry.go:31] will retry after 252.599588ms: waiting for machine to come up
	I0719 18:30:33.386424   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.386825   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.386854   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.386776   33173 retry.go:31] will retry after 352.1385ms: waiting for machine to come up
	I0719 18:30:33.740021   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.740420   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.740449   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.740367   33173 retry.go:31] will retry after 303.509496ms: waiting for machine to come up
	I0719 18:30:34.045706   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:34.046159   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:34.046206   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:34.046109   33173 retry.go:31] will retry after 567.315ms: waiting for machine to come up
	I0719 18:30:34.614510   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:34.614895   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:34.614923   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:34.614842   33173 retry.go:31] will retry after 539.769716ms: waiting for machine to come up
	I0719 18:30:35.156500   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:35.156841   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:35.156866   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:35.156795   33173 retry.go:31] will retry after 784.992453ms: waiting for machine to come up
	I0719 18:30:35.943646   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:35.944122   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:35.944160   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:35.944088   33173 retry.go:31] will retry after 823.565264ms: waiting for machine to come up
	I0719 18:30:36.768679   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:36.769041   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:36.769065   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:36.769004   33173 retry.go:31] will retry after 985.011439ms: waiting for machine to come up
	I0719 18:30:37.754996   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:37.755650   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:37.755679   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:37.755600   33173 retry.go:31] will retry after 1.801933856s: waiting for machine to come up
	I0719 18:30:39.559512   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:39.559930   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:39.559960   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:39.559885   33173 retry.go:31] will retry after 1.622391207s: waiting for machine to come up
	I0719 18:30:41.184707   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:41.185100   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:41.185129   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:41.185034   33173 retry.go:31] will retry after 2.072642534s: waiting for machine to come up
	I0719 18:30:43.260149   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:43.260515   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:43.260539   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:43.260482   33173 retry.go:31] will retry after 2.274377543s: waiting for machine to come up
	I0719 18:30:45.537822   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:45.538132   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:45.538191   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:45.538106   33173 retry.go:31] will retry after 4.101164733s: waiting for machine to come up
	I0719 18:30:49.642837   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:49.643211   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:49.643234   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:49.643170   33173 retry.go:31] will retry after 5.540453029s: waiting for machine to come up
	I0719 18:30:55.187752   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.188145   33150 main.go:141] libmachine: (ha-269108) Found IP for machine: 192.168.39.163
	I0719 18:30:55.188167   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has current primary IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
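
The retry lines above back off with increasing delays until the new domain obtains a DHCP lease on mk-ha-269108. A simplified sketch of that wait which polls `virsh net-dhcp-leases` and matches on the MAC address (an illustration only; the driver queries libvirt directly rather than shelling out to virsh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIP polls the libvirt network's DHCP leases until an entry for the
	// domain's MAC address appears, backing off between attempts.
	func waitForIP(network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if !strings.Contains(line, mac) {
						continue
					}
					for _, f := range strings.Fields(line) {
						if strings.Contains(f, "/") { // e.g. 192.168.39.163/24
							return strings.SplitN(f, "/", 2)[0], nil
						}
					}
				}
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
	}

	func main() {
		ip, err := waitForIP("mk-ha-269108", "52:54:00:0f:2d:bd", 2*time.Minute)
		fmt.Println(ip, err)
	}
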
	I0719 18:30:55.188176   33150 main.go:141] libmachine: (ha-269108) Reserving static IP address...
	I0719 18:30:55.188602   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find host DHCP lease matching {name: "ha-269108", mac: "52:54:00:0f:2d:bd", ip: "192.168.39.163"} in network mk-ha-269108
	I0719 18:30:55.258637   33150 main.go:141] libmachine: (ha-269108) DBG | Getting to WaitForSSH function...
	I0719 18:30:55.258667   33150 main.go:141] libmachine: (ha-269108) Reserved static IP address: 192.168.39.163
	I0719 18:30:55.258682   33150 main.go:141] libmachine: (ha-269108) Waiting for SSH to be available...
	I0719 18:30:55.261529   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.261925   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.261946   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.262112   33150 main.go:141] libmachine: (ha-269108) DBG | Using SSH client type: external
	I0719 18:30:55.262137   33150 main.go:141] libmachine: (ha-269108) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa (-rw-------)
	I0719 18:30:55.262173   33150 main.go:141] libmachine: (ha-269108) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:30:55.262187   33150 main.go:141] libmachine: (ha-269108) DBG | About to run SSH command:
	I0719 18:30:55.262199   33150 main.go:141] libmachine: (ha-269108) DBG | exit 0
	I0719 18:30:55.383685   33150 main.go:141] libmachine: (ha-269108) DBG | SSH cmd err, output: <nil>: 
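
WaitForSSH above confirms readiness by running `exit 0` through the external ssh client with host-key checking disabled. The sketch below is a much simpler readiness probe that only checks that tcp/22 is reachable; unlike the real check it does not verify that authentication works:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSHPort retries a TCP dial to port 22 until it succeeds or the
	// timeout expires.
	func waitForSSHPort(ip string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh port on %s not reachable within %s", ip, timeout)
	}

	func main() {
		fmt.Println(waitForSSHPort("192.168.39.163", 2*time.Minute))
	}
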
	I0719 18:30:55.383964   33150 main.go:141] libmachine: (ha-269108) KVM machine creation complete!
	I0719 18:30:55.384297   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:55.384870   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:55.385087   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:55.385297   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:30:55.385314   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:30:55.386580   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:30:55.386604   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:30:55.386610   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:30:55.386616   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.389184   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.389721   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.389744   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.389925   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.390106   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.390255   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.390423   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.390598   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.390794   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.390806   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:30:55.486923   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:30:55.486947   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:30:55.486953   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.489905   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.490298   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.490322   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.490498   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.490686   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.490869   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.491030   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.491188   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.491380   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.491395   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:30:55.588352   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:30:55.588454   33150 main.go:141] libmachine: found compatible host: buildroot
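
The provisioner is selected from the `cat /etc/os-release` output above (ID=buildroot picks the Buildroot provisioner). A small sketch of parsing that output into a map so a caller can branch on ID; the provisioner mapping itself is not shown:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns `cat /etc/os-release` output into a key/value map,
	// trimming optional quotes, so the caller can branch on fields like ID.
	func parseOSRelease(out string) map[string]string {
		info := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			info[k] = strings.Trim(v, `"`)
		}
		return info
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(out)
		fmt.Println(info["ID"], info["PRETTY_NAME"]) // buildroot Buildroot 2023.02.9
	}
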
	I0719 18:30:55.588469   33150 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:30:55.588480   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.588752   33150 buildroot.go:166] provisioning hostname "ha-269108"
	I0719 18:30:55.588774   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.588974   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.591207   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.591532   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.591565   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.591786   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.591974   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.592166   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.592434   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.592710   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.592927   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.592943   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108 && echo "ha-269108" | sudo tee /etc/hostname
	I0719 18:30:55.705281   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:30:55.705308   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.708401   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.708809   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.708849   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.709025   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.709223   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.709371   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.709613   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.709825   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.709984   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.710000   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:30:55.817740   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:30:55.817771   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:30:55.817821   33150 buildroot.go:174] setting up certificates
	I0719 18:30:55.817834   33150 provision.go:84] configureAuth start
	I0719 18:30:55.817852   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.818142   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:55.821007   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.821339   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.821359   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.821512   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.823749   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.824121   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.824140   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.824291   33150 provision.go:143] copyHostCerts
	I0719 18:30:55.824341   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:30:55.824375   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:30:55.824391   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:30:55.824460   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:30:55.824547   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:30:55.824571   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:30:55.824588   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:30:55.824630   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:30:55.824687   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:30:55.824711   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:30:55.824720   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:30:55.824749   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:30:55.824810   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108 san=[127.0.0.1 192.168.39.163 ha-269108 localhost minikube]
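
configureAuth generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube, as listed above. A compact sketch with crypto/x509; it self-signs to stay short, whereas the real flow signs with the ca.pem / ca-key.pem pair:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert creates a certificate whose SANs include the machine's IPs
	// and hostnames. Self-signed here for brevity.
	func newServerCert(org string, ips []net.IP, dnsNames []string) ([]byte, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dnsNames,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		pemBytes, err := newServerCert("jenkins.ha-269108",
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.163")},
			[]string{"ha-269108", "localhost", "minikube"})
		fmt.Println(len(pemBytes), err)
	}
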
	I0719 18:30:55.988123   33150 provision.go:177] copyRemoteCerts
	I0719 18:30:55.988178   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:30:55.988209   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.990973   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.991349   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.991371   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.991622   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.991879   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.992043   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.992237   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.068799   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:30:56.068859   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:30:56.090604   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:30:56.090660   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 18:30:56.112129   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:30:56.112203   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:30:56.133910   33150 provision.go:87] duration metric: took 316.058428ms to configureAuth
	I0719 18:30:56.133936   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:30:56.134132   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:30:56.134211   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.137205   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.137537   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.137557   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.137739   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.137968   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.138147   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.138292   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.138519   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:56.138701   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:56.138718   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:30:56.385670   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:30:56.385697   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:30:56.385720   33150 main.go:141] libmachine: (ha-269108) Calling .GetURL
	I0719 18:30:56.387044   33150 main.go:141] libmachine: (ha-269108) DBG | Using libvirt version 6000000
	I0719 18:30:56.389406   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.389747   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.389775   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.389938   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:30:56.389953   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:30:56.389959   33150 client.go:171] duration metric: took 24.88550781s to LocalClient.Create
	I0719 18:30:56.389981   33150 start.go:167] duration metric: took 24.885561275s to libmachine.API.Create "ha-269108"
	I0719 18:30:56.389990   33150 start.go:293] postStartSetup for "ha-269108" (driver="kvm2")
	I0719 18:30:56.389999   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:30:56.390029   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.390383   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:30:56.390408   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.392414   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.392758   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.392777   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.392911   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.393126   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.393312   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.393470   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.469332   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:30:56.473167   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:30:56.473193   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:30:56.473248   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:30:56.473313   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:30:56.473323   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:30:56.473415   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:30:56.481923   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:30:56.503527   33150 start.go:296] duration metric: took 113.525152ms for postStartSetup
	I0719 18:30:56.503578   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:56.504146   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:56.506944   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.507327   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.507353   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.507706   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:30:56.507900   33150 start.go:128] duration metric: took 25.020900922s to createHost
	I0719 18:30:56.507923   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.510126   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.510459   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.510482   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.510608   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.510801   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.510937   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.511051   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.511199   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:56.511360   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:56.511376   33150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 18:30:56.608287   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413856.588592402
	
	I0719 18:30:56.608330   33150 fix.go:216] guest clock: 1721413856.588592402
	I0719 18:30:56.608341   33150 fix.go:229] Guest: 2024-07-19 18:30:56.588592402 +0000 UTC Remote: 2024-07-19 18:30:56.5079122 +0000 UTC m=+25.120917110 (delta=80.680202ms)
	I0719 18:30:56.608371   33150 fix.go:200] guest clock delta is within tolerance: 80.680202ms
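For context, the guest-clock step above compares the VM clock to the host clock and accepts it when the delta is small. A minimal Go sketch of that check (illustrative only; function and tolerance value are assumptions, not minikube's code):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the difference between the guest
// clock and the host clock is small enough to skip resynchronizing.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(80 * time.Millisecond) // roughly the ~80ms delta seen in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}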
	I0719 18:30:56.608378   33150 start.go:83] releasing machines lock for "ha-269108", held for 25.121454089s
	I0719 18:30:56.608399   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.608683   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:56.611161   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.611464   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.611486   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.611682   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612173   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612349   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612447   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:30:56.612496   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.612557   33150 ssh_runner.go:195] Run: cat /version.json
	I0719 18:30:56.612581   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.615000   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615329   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.615360   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615486   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615519   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.615783   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.615949   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.615975   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.616002   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.616122   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.616159   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.616329   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.616498   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.616649   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.689030   33150 ssh_runner.go:195] Run: systemctl --version
	I0719 18:30:56.724548   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:30:56.880613   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:30:56.886666   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:30:56.886742   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:30:56.901684   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:30:56.901707   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:30:56.901774   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:30:56.917945   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:30:56.932470   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:30:56.932530   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:30:56.946517   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:30:56.959847   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:30:57.070914   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:30:57.211165   33150 docker.go:233] disabling docker service ...
	I0719 18:30:57.211241   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:30:57.225219   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:30:57.238602   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:30:57.372703   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:30:57.479211   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:30:57.494650   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:30:57.512970   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:30:57.513034   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.523939   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:30:57.524027   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.534681   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.545541   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.556278   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:30:57.567481   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.578179   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.595082   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
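The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed to set the pause image and the cgroupfs cgroup manager. A minimal Go sketch that builds the same style of commands (helper name and structure are illustrative, not minikube's implementation):

package main

import "fmt"

// crioConfigCommands returns shell commands equivalent to the sed edits in the
// log: point CRI-O at a pause image and switch its cgroup manager.
func crioConfigCommands(pauseImage, cgroupManager, confPath string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, confPath),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, confPath),
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs", "/etc/crio/crio.conf.d/02-crio.conf") {
		fmt.Println(c)
	}
}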
	I0719 18:30:57.606021   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:30:57.615980   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:30:57.616025   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:30:57.629267   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:30:57.639288   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:30:57.755011   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 18:30:57.882249   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:30:57.882342   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:30:57.886772   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:30:57.886817   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:30:57.890076   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:30:57.927197   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:30:57.927274   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:30:57.953210   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:30:57.980690   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:30:57.982008   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:57.984791   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:57.985146   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:57.985191   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:57.985319   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:30:57.989005   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
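The /etc/hosts update above filters out any stale entry and re-appends a fresh one, so repeated runs stay idempotent. A minimal Go sketch that produces the same kind of bash one-liner (illustrative helper, not minikube's code):

package main

import "fmt"

// addHostsEntryCmd builds a bash command that drops any existing line ending in
// "<TAB>hostname", appends a fresh "ip<TAB>hostname" entry, and installs the
// rewritten file back to /etc/hosts via sudo cp.
func addHostsEntryCmd(ip, hostname string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", hostname, ip, hostname)
}

func main() {
	fmt.Println(addHostsEntryCmd("192.168.39.1", "host.minikube.internal"))
}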
	I0719 18:30:58.001238   33150 kubeadm.go:883] updating cluster {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:30:58.001464   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:30:58.001552   33150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:30:58.030865   33150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 18:30:58.030940   33150 ssh_runner.go:195] Run: which lz4
	I0719 18:30:58.034438   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 18:30:58.034519   33150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 18:30:58.038297   33150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 18:30:58.038341   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 18:30:59.268077   33150 crio.go:462] duration metric: took 1.233578451s to copy over tarball
	I0719 18:30:59.268156   33150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 18:31:01.397481   33150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.129301181s)
	I0719 18:31:01.397510   33150 crio.go:469] duration metric: took 2.129404065s to extract the tarball
	I0719 18:31:01.397519   33150 ssh_runner.go:146] rm: /preloaded.tar.lz4
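The preload flow above copies an lz4-compressed image tarball onto the VM, extracts it under /var while preserving extended attributes, then removes the tarball. A minimal Go sketch of the extraction step (illustrative; it shells out to tar/lz4, which are assumed to be installed on the target host):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload decompresses an lz4 tarball into dir, preserving the
// security.capability xattrs, mirroring the tar invocation in the log.
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}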
	I0719 18:31:01.434652   33150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:31:01.475962   33150 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:31:01.475983   33150 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:31:01.475991   33150 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 18:31:01.476089   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:31:01.476158   33150 ssh_runner.go:195] Run: crio config
	I0719 18:31:01.518892   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:31:01.518909   33150 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 18:31:01.518918   33150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:31:01.518937   33150 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-269108 NodeName:ha-269108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:31:01.519109   33150 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-269108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
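The kubeadm config above is rendered from the cluster parameters (node IP, control-plane endpoint, pod and service CIDRs, Kubernetes version). A minimal Go sketch of rendering such a fragment with text/template (the template excerpt and field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A tiny excerpt of a ClusterConfiguration, parameterized like the generated
// config shown above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"ControlPlaneEndpoint": "control-plane.minikube.internal:8443",
		"KubernetesVersion":    "v1.30.3",
		"DNSDomain":            "cluster.local",
		"PodSubnet":            "10.244.0.0/16",
		"ServiceSubnet":        "10.96.0.0/12",
	})
}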
	
	I0719 18:31:01.519137   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:31:01.519182   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:31:01.533875   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:31:01.533979   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0719 18:31:01.534035   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:01.543190   33150 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:31:01.543244   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 18:31:01.551877   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 18:31:01.567026   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:31:01.581462   33150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 18:31:01.595661   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 18:31:01.610032   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:31:01.613479   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:01.624196   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:01.751552   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:31:01.768136   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.163
	I0719 18:31:01.768159   33150 certs.go:194] generating shared ca certs ...
	I0719 18:31:01.768179   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.768345   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:31:01.768417   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:31:01.768429   33150 certs.go:256] generating profile certs ...
	I0719 18:31:01.768494   33150 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:31:01.768512   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt with IP's: []
	I0719 18:31:01.850221   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt ...
	I0719 18:31:01.850245   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt: {Name:mk1bbdb3d9eee875d6d8d38aa2a4774f7a83ec0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.850433   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key ...
	I0719 18:31:01.850448   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key: {Name:mk6c8838f2c3df45dc84b178c31aa71d51583a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.850549   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d
	I0719 18:31:01.850564   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.254]
	I0719 18:31:02.131129   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d ...
	I0719 18:31:02.131159   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d: {Name:mka5712a99242119887a39a3dc4313da01e0ff4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.131327   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d ...
	I0719 18:31:02.131343   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d: {Name:mk6a417aa50ec31953502174b7f9c41a2557c40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.131450   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:31:02.131537   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:31:02.131594   33150 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:31:02.131607   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt with IP's: []
	I0719 18:31:02.188179   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt ...
	I0719 18:31:02.188208   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt: {Name:mk8634b7706115d209192d55859676f11a271bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.188383   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key ...
	I0719 18:31:02.188398   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key: {Name:mk3396d6a2dd4b0e3c48be746c4b2c3be317c855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.188486   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:31:02.188509   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:31:02.188524   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:31:02.188540   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:31:02.188554   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:31:02.188571   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:31:02.188588   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:31:02.188600   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
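The profile certs above are issued from the shared minikube CA, with the apiserver cert carrying IP SANs for the service IP, localhost, the node IP and the HA VIP. A minimal Go sketch of issuing such a cert with crypto/x509 directly (illustrative only; error handling trimmed, and this is not minikube's crypto helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver leaf cert signed by the CA, with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.163"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Printf("issued apiserver cert, %d DER bytes\n", len(leafDER))
}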
	I0719 18:31:02.188649   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:31:02.188682   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:31:02.188691   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:31:02.188711   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:31:02.188732   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:31:02.188750   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:31:02.188789   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:02.188830   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.188845   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.188857   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.189353   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:31:02.214973   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:31:02.240024   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:31:02.263329   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:31:02.285700   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 18:31:02.308070   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:31:02.330239   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:31:02.352293   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:31:02.374566   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:31:02.397173   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:31:02.421367   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:31:02.443990   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:31:02.459603   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:31:02.464978   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:31:02.475394   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.479504   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.479560   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.485097   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:31:02.496863   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:31:02.508107   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.512260   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.512311   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.517822   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:31:02.528933   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:31:02.540512   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.544670   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.544711   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.549980   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
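The openssl/ln -fs steps above install each PEM into the system trust store under /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL locates CA certs. A minimal Go sketch of that step (illustrative; it shells out to openssl just as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of pemPath and symlinks the
// file into certsDir as <hash>.0, mirroring the "openssl x509 -hash" + "ln -fs" pair above.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}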
	I0719 18:31:02.565760   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:31:02.570861   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:31:02.570913   33150 kubeadm.go:392] StartCluster: {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:31:02.570992   33150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:31:02.571040   33150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:31:02.636018   33150 cri.go:89] found id: ""
	I0719 18:31:02.636085   33150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 18:31:02.646956   33150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 18:31:02.655969   33150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 18:31:02.665048   33150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 18:31:02.665062   33150 kubeadm.go:157] found existing configuration files:
	
	I0719 18:31:02.665105   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 18:31:02.673592   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 18:31:02.673665   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 18:31:02.682219   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 18:31:02.690457   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 18:31:02.690501   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 18:31:02.699219   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 18:31:02.707799   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 18:31:02.707864   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 18:31:02.716764   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 18:31:02.725066   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 18:31:02.725118   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 18:31:02.733678   33150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 18:31:02.959424   33150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 18:31:13.703075   33150 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 18:31:13.703157   33150 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 18:31:13.703231   33150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 18:31:13.703346   33150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 18:31:13.703453   33150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 18:31:13.703541   33150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 18:31:13.705067   33150 out.go:204]   - Generating certificates and keys ...
	I0719 18:31:13.705177   33150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 18:31:13.705258   33150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 18:31:13.705342   33150 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 18:31:13.705440   33150 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 18:31:13.705511   33150 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 18:31:13.705555   33150 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 18:31:13.705644   33150 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 18:31:13.705780   33150 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-269108 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I0719 18:31:13.705858   33150 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 18:31:13.706012   33150 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-269108 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I0719 18:31:13.706085   33150 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 18:31:13.706150   33150 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 18:31:13.706193   33150 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 18:31:13.706240   33150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 18:31:13.706312   33150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 18:31:13.706414   33150 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 18:31:13.706475   33150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 18:31:13.706536   33150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 18:31:13.706580   33150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 18:31:13.706656   33150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 18:31:13.706719   33150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 18:31:13.708161   33150 out.go:204]   - Booting up control plane ...
	I0719 18:31:13.708234   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 18:31:13.708297   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 18:31:13.708352   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 18:31:13.708442   33150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 18:31:13.708545   33150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 18:31:13.708612   33150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 18:31:13.708752   33150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 18:31:13.708816   33150 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 18:31:13.708889   33150 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.465729ms
	I0719 18:31:13.708954   33150 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 18:31:13.709006   33150 kubeadm.go:310] [api-check] The API server is healthy after 6.042829217s
	I0719 18:31:13.709095   33150 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 18:31:13.709205   33150 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 18:31:13.709261   33150 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 18:31:13.709413   33150 kubeadm.go:310] [mark-control-plane] Marking the node ha-269108 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 18:31:13.709483   33150 kubeadm.go:310] [bootstrap-token] Using token: l73je0.11l4jtjctlpyqoil
	I0719 18:31:13.710722   33150 out.go:204]   - Configuring RBAC rules ...
	I0719 18:31:13.710826   33150 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 18:31:13.710909   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 18:31:13.711062   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 18:31:13.711206   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 18:31:13.711338   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 18:31:13.711417   33150 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 18:31:13.711519   33150 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 18:31:13.711562   33150 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 18:31:13.711601   33150 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 18:31:13.711607   33150 kubeadm.go:310] 
	I0719 18:31:13.711676   33150 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 18:31:13.711686   33150 kubeadm.go:310] 
	I0719 18:31:13.711790   33150 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 18:31:13.711798   33150 kubeadm.go:310] 
	I0719 18:31:13.711840   33150 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 18:31:13.711899   33150 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 18:31:13.711947   33150 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 18:31:13.711953   33150 kubeadm.go:310] 
	I0719 18:31:13.712000   33150 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 18:31:13.712005   33150 kubeadm.go:310] 
	I0719 18:31:13.712043   33150 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 18:31:13.712049   33150 kubeadm.go:310] 
	I0719 18:31:13.712098   33150 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 18:31:13.712161   33150 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 18:31:13.712218   33150 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 18:31:13.712223   33150 kubeadm.go:310] 
	I0719 18:31:13.712294   33150 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 18:31:13.712361   33150 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 18:31:13.712366   33150 kubeadm.go:310] 
	I0719 18:31:13.712441   33150 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l73je0.11l4jtjctlpyqoil \
	I0719 18:31:13.712550   33150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 18:31:13.712582   33150 kubeadm.go:310] 	--control-plane 
	I0719 18:31:13.712591   33150 kubeadm.go:310] 
	I0719 18:31:13.712694   33150 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 18:31:13.712709   33150 kubeadm.go:310] 
	I0719 18:31:13.712830   33150 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l73je0.11l4jtjctlpyqoil \
	I0719 18:31:13.712976   33150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 18:31:13.712990   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:31:13.712996   33150 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 18:31:13.714494   33150 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 18:31:13.715613   33150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 18:31:13.720394   33150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 18:31:13.720411   33150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 18:31:13.738689   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 18:31:14.082910   33150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 18:31:14.083025   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:14.083061   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108 minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=true
	I0719 18:31:14.104467   33150 ops.go:34] apiserver oom_adj: -16
	I0719 18:31:14.204309   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:14.705035   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:15.204661   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:15.704547   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:16.205339   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:16.704865   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:17.204998   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:17.705161   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:18.204488   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:18.705313   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:19.205391   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:19.705056   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:20.204936   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:20.704455   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:21.204347   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:21.705091   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:22.204518   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:22.705204   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:23.205078   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:23.704886   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:24.204991   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:24.704860   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:25.204471   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:25.705201   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.205340   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.705092   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.800735   33150 kubeadm.go:1113] duration metric: took 12.717785097s to wait for elevateKubeSystemPrivileges
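	The burst of repeated "kubectl get sa default" calls above is minikube polling until the default service account exists before it treats the cluster-admin binding as usable; the summary line puts the whole wait at ~12.7s. A minimal bash equivalent of that poll, reusing the exact kubectl invocation from the log (the 0.5s interval is an assumption read off the timestamps):

	  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done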
	I0719 18:31:26.800770   33150 kubeadm.go:394] duration metric: took 24.229860376s to StartCluster
	I0719 18:31:26.800791   33150 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:26.800884   33150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:31:26.801658   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:26.801887   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 18:31:26.801905   33150 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 18:31:26.801949   33150 addons.go:69] Setting storage-provisioner=true in profile "ha-269108"
	I0719 18:31:26.801979   33150 addons.go:234] Setting addon storage-provisioner=true in "ha-269108"
	I0719 18:31:26.801883   33150 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:26.802001   33150 addons.go:69] Setting default-storageclass=true in profile "ha-269108"
	I0719 18:31:26.802046   33150 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-269108"
	I0719 18:31:26.802092   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:26.802004   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:31:26.802006   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:26.802460   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.802482   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.802497   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.802521   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.817752   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0719 18:31:26.817982   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0719 18:31:26.818243   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.818472   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.818784   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.818800   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.818955   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.818968   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.819194   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.819282   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.819440   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.819932   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.819977   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.821832   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:31:26.822183   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 18:31:26.822766   33150 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 18:31:26.822978   33150 addons.go:234] Setting addon default-storageclass=true in "ha-269108"
	I0719 18:31:26.823018   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:26.823350   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.823395   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.835506   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0719 18:31:26.835953   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.836451   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.836471   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.836768   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.836964   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.837934   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0719 18:31:26.838378   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.838884   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.838901   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.838916   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:26.839172   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.839725   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.839769   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.841333   33150 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 18:31:26.843028   33150 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:31:26.843046   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 18:31:26.843063   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:26.846242   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.846701   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:26.846734   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.846981   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:26.847191   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:26.847333   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:26.847495   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:26.854341   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0719 18:31:26.854744   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.855200   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.855223   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.855570   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.855765   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.857239   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:26.857436   33150 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 18:31:26.857450   33150 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 18:31:26.857464   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:26.860456   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.860893   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:26.860913   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.861143   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:26.861307   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:26.861466   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:26.861588   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:26.891129   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 18:31:26.999097   33150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 18:31:27.034517   33150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:31:27.364114   33150 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
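	The long sed pipeline a few lines up splices a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run) before the ConfigMap is replaced. To inspect the injected stanza afterwards (standard kubectl; the expected block is quoted from the pipeline itself):

	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # expect a block like:
	  #   hosts {
	  #      192.168.39.1 host.minikube.internal
	  #      fallthrough
	  #   }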
	I0719 18:31:27.446502   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.446529   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.446800   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.446816   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.446824   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.446832   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.446841   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.447062   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.447079   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.447098   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.447186   33150 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 18:31:27.447200   33150 round_trippers.go:469] Request Headers:
	I0719 18:31:27.447209   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:31:27.447218   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:31:27.455421   33150 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 18:31:27.456174   33150 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 18:31:27.456193   33150 round_trippers.go:469] Request Headers:
	I0719 18:31:27.456204   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:31:27.456214   33150 round_trippers.go:473]     Content-Type: application/json
	I0719 18:31:27.456221   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:31:27.458593   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:31:27.458757   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.458774   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.459122   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.459202   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.459235   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.630487   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.630514   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.630831   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.630849   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.630859   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.630878   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.631146   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.631207   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.631223   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.632974   33150 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 18:31:27.634463   33150 addons.go:510] duration metric: took 832.560429ms for enable addons: enabled=[default-storageclass storage-provisioner]
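	Both addons are now reported enabled for the ha-269108 profile. A quick cross-check from the host, using the same test binary and profile flag seen elsewhere in this report (the command is standard minikube usage, not taken from this log):

	  out/minikube-linux-amd64 -p ha-269108 addons list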
	I0719 18:31:27.634495   33150 start.go:246] waiting for cluster config update ...
	I0719 18:31:27.634509   33150 start.go:255] writing updated cluster config ...
	I0719 18:31:27.636309   33150 out.go:177] 
	I0719 18:31:27.637962   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:27.638027   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:27.639532   33150 out.go:177] * Starting "ha-269108-m02" control-plane node in "ha-269108" cluster
	I0719 18:31:27.640602   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:31:27.640623   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:31:27.640714   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:31:27.640725   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:31:27.640803   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:27.640944   33150 start.go:360] acquireMachinesLock for ha-269108-m02: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:31:27.640981   33150 start.go:364] duration metric: took 20.161µs to acquireMachinesLock for "ha-269108-m02"
	I0719 18:31:27.641001   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:27.641080   33150 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 18:31:27.642702   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:31:27.642799   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:27.642831   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:27.658005   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0719 18:31:27.658402   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:27.658896   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:27.658923   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:27.659304   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:27.659463   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:27.659641   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:27.659799   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:31:27.659827   33150 client.go:168] LocalClient.Create starting
	I0719 18:31:27.659868   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:31:27.659909   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:31:27.659925   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:31:27.659989   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:31:27.660010   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:31:27.660027   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:31:27.660063   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:31:27.660074   33150 main.go:141] libmachine: (ha-269108-m02) Calling .PreCreateCheck
	I0719 18:31:27.660244   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:27.660673   33150 main.go:141] libmachine: Creating machine...
	I0719 18:31:27.660686   33150 main.go:141] libmachine: (ha-269108-m02) Calling .Create
	I0719 18:31:27.660848   33150 main.go:141] libmachine: (ha-269108-m02) Creating KVM machine...
	I0719 18:31:27.662015   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found existing default KVM network
	I0719 18:31:27.662208   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found existing private KVM network mk-ha-269108
	I0719 18:31:27.662377   33150 main.go:141] libmachine: (ha-269108-m02) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 ...
	I0719 18:31:27.662400   33150 main.go:141] libmachine: (ha-269108-m02) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:31:27.662488   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:27.662362   33536 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:31:27.662549   33150 main.go:141] libmachine: (ha-269108-m02) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:31:27.878032   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:27.877904   33536 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa...
	I0719 18:31:28.146109   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:28.146005   33536 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/ha-269108-m02.rawdisk...
	I0719 18:31:28.146137   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Writing magic tar header
	I0719 18:31:28.146147   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Writing SSH key tar header
	I0719 18:31:28.146156   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:28.146128   33536 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 ...
	I0719 18:31:28.146229   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02
	I0719 18:31:28.146254   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 (perms=drwx------)
	I0719 18:31:28.146269   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:31:28.146284   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:31:28.146294   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:31:28.146301   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:31:28.146307   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:31:28.146316   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:31:28.146324   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:31:28.146333   33150 main.go:141] libmachine: (ha-269108-m02) Creating domain...
	I0719 18:31:28.146352   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:31:28.146368   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:31:28.146396   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:31:28.146421   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home
	I0719 18:31:28.146434   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Skipping /home - not owner
	I0719 18:31:28.147556   33150 main.go:141] libmachine: (ha-269108-m02) define libvirt domain using xml: 
	I0719 18:31:28.147571   33150 main.go:141] libmachine: (ha-269108-m02) <domain type='kvm'>
	I0719 18:31:28.147577   33150 main.go:141] libmachine: (ha-269108-m02)   <name>ha-269108-m02</name>
	I0719 18:31:28.147582   33150 main.go:141] libmachine: (ha-269108-m02)   <memory unit='MiB'>2200</memory>
	I0719 18:31:28.147587   33150 main.go:141] libmachine: (ha-269108-m02)   <vcpu>2</vcpu>
	I0719 18:31:28.147591   33150 main.go:141] libmachine: (ha-269108-m02)   <features>
	I0719 18:31:28.147597   33150 main.go:141] libmachine: (ha-269108-m02)     <acpi/>
	I0719 18:31:28.147601   33150 main.go:141] libmachine: (ha-269108-m02)     <apic/>
	I0719 18:31:28.147606   33150 main.go:141] libmachine: (ha-269108-m02)     <pae/>
	I0719 18:31:28.147623   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.147630   33150 main.go:141] libmachine: (ha-269108-m02)   </features>
	I0719 18:31:28.147640   33150 main.go:141] libmachine: (ha-269108-m02)   <cpu mode='host-passthrough'>
	I0719 18:31:28.147652   33150 main.go:141] libmachine: (ha-269108-m02)   
	I0719 18:31:28.147656   33150 main.go:141] libmachine: (ha-269108-m02)   </cpu>
	I0719 18:31:28.147661   33150 main.go:141] libmachine: (ha-269108-m02)   <os>
	I0719 18:31:28.147668   33150 main.go:141] libmachine: (ha-269108-m02)     <type>hvm</type>
	I0719 18:31:28.147674   33150 main.go:141] libmachine: (ha-269108-m02)     <boot dev='cdrom'/>
	I0719 18:31:28.147678   33150 main.go:141] libmachine: (ha-269108-m02)     <boot dev='hd'/>
	I0719 18:31:28.147686   33150 main.go:141] libmachine: (ha-269108-m02)     <bootmenu enable='no'/>
	I0719 18:31:28.147690   33150 main.go:141] libmachine: (ha-269108-m02)   </os>
	I0719 18:31:28.147696   33150 main.go:141] libmachine: (ha-269108-m02)   <devices>
	I0719 18:31:28.147706   33150 main.go:141] libmachine: (ha-269108-m02)     <disk type='file' device='cdrom'>
	I0719 18:31:28.147721   33150 main.go:141] libmachine: (ha-269108-m02)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/boot2docker.iso'/>
	I0719 18:31:28.147732   33150 main.go:141] libmachine: (ha-269108-m02)       <target dev='hdc' bus='scsi'/>
	I0719 18:31:28.147740   33150 main.go:141] libmachine: (ha-269108-m02)       <readonly/>
	I0719 18:31:28.147750   33150 main.go:141] libmachine: (ha-269108-m02)     </disk>
	I0719 18:31:28.147759   33150 main.go:141] libmachine: (ha-269108-m02)     <disk type='file' device='disk'>
	I0719 18:31:28.147767   33150 main.go:141] libmachine: (ha-269108-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:31:28.147775   33150 main.go:141] libmachine: (ha-269108-m02)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/ha-269108-m02.rawdisk'/>
	I0719 18:31:28.147781   33150 main.go:141] libmachine: (ha-269108-m02)       <target dev='hda' bus='virtio'/>
	I0719 18:31:28.147786   33150 main.go:141] libmachine: (ha-269108-m02)     </disk>
	I0719 18:31:28.147796   33150 main.go:141] libmachine: (ha-269108-m02)     <interface type='network'>
	I0719 18:31:28.147805   33150 main.go:141] libmachine: (ha-269108-m02)       <source network='mk-ha-269108'/>
	I0719 18:31:28.147817   33150 main.go:141] libmachine: (ha-269108-m02)       <model type='virtio'/>
	I0719 18:31:28.147827   33150 main.go:141] libmachine: (ha-269108-m02)     </interface>
	I0719 18:31:28.147851   33150 main.go:141] libmachine: (ha-269108-m02)     <interface type='network'>
	I0719 18:31:28.147864   33150 main.go:141] libmachine: (ha-269108-m02)       <source network='default'/>
	I0719 18:31:28.147878   33150 main.go:141] libmachine: (ha-269108-m02)       <model type='virtio'/>
	I0719 18:31:28.147911   33150 main.go:141] libmachine: (ha-269108-m02)     </interface>
	I0719 18:31:28.147950   33150 main.go:141] libmachine: (ha-269108-m02)     <serial type='pty'>
	I0719 18:31:28.147965   33150 main.go:141] libmachine: (ha-269108-m02)       <target port='0'/>
	I0719 18:31:28.147976   33150 main.go:141] libmachine: (ha-269108-m02)     </serial>
	I0719 18:31:28.147990   33150 main.go:141] libmachine: (ha-269108-m02)     <console type='pty'>
	I0719 18:31:28.148002   33150 main.go:141] libmachine: (ha-269108-m02)       <target type='serial' port='0'/>
	I0719 18:31:28.148014   33150 main.go:141] libmachine: (ha-269108-m02)     </console>
	I0719 18:31:28.148029   33150 main.go:141] libmachine: (ha-269108-m02)     <rng model='virtio'>
	I0719 18:31:28.148041   33150 main.go:141] libmachine: (ha-269108-m02)       <backend model='random'>/dev/random</backend>
	I0719 18:31:28.148058   33150 main.go:141] libmachine: (ha-269108-m02)     </rng>
	I0719 18:31:28.148070   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.148079   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.148090   33150 main.go:141] libmachine: (ha-269108-m02)   </devices>
	I0719 18:31:28.148112   33150 main.go:141] libmachine: (ha-269108-m02) </domain>
	I0719 18:31:28.148126   33150 main.go:141] libmachine: (ha-269108-m02) 
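	The XML assembled above is the libvirt domain definition for the m02 machine: the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio device, one NIC on the private mk-ha-269108 network and one on the libvirt default network, plus a serial console and a virtio RNG. minikube drives this through the libvirt API, but a rough manual equivalent with virsh would be (the XML file name is hypothetical):

	  virsh define ha-269108-m02.xml      # the domain XML shown above, saved to a file
	  virsh start ha-269108-m02
	  virsh domifaddr ha-269108-m02       # repeat until a DHCP lease appears (the "Waiting to get IP" loop below)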
	I0719 18:31:28.154651   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:bb:2c:b7 in network default
	I0719 18:31:28.155229   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring networks are active...
	I0719 18:31:28.155264   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:28.155978   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring network default is active
	I0719 18:31:28.156294   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring network mk-ha-269108 is active
	I0719 18:31:28.156620   33150 main.go:141] libmachine: (ha-269108-m02) Getting domain xml...
	I0719 18:31:28.157377   33150 main.go:141] libmachine: (ha-269108-m02) Creating domain...
	I0719 18:31:29.357082   33150 main.go:141] libmachine: (ha-269108-m02) Waiting to get IP...
	I0719 18:31:29.357995   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.358485   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.358537   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.358459   33536 retry.go:31] will retry after 204.478278ms: waiting for machine to come up
	I0719 18:31:29.564846   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.565160   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.565186   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.565131   33536 retry.go:31] will retry after 234.726239ms: waiting for machine to come up
	I0719 18:31:29.801500   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.801930   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.801961   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.801875   33536 retry.go:31] will retry after 322.662599ms: waiting for machine to come up
	I0719 18:31:30.126636   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:30.127043   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:30.127075   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:30.126979   33536 retry.go:31] will retry after 468.612553ms: waiting for machine to come up
	I0719 18:31:30.597628   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:30.597984   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:30.598014   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:30.597947   33536 retry.go:31] will retry after 487.953425ms: waiting for machine to come up
	I0719 18:31:31.087648   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:31.088143   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:31.088173   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:31.088068   33536 retry.go:31] will retry after 864.735779ms: waiting for machine to come up
	I0719 18:31:31.953916   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:31.954368   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:31.954398   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:31.954322   33536 retry.go:31] will retry after 960.278085ms: waiting for machine to come up
	I0719 18:31:32.915680   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:32.916153   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:32.916182   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:32.916106   33536 retry.go:31] will retry after 1.384009045s: waiting for machine to come up
	I0719 18:31:34.301188   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:34.301541   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:34.301564   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:34.301505   33536 retry.go:31] will retry after 1.396513064s: waiting for machine to come up
	I0719 18:31:35.700082   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:35.700506   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:35.700543   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:35.700424   33536 retry.go:31] will retry after 2.311417499s: waiting for machine to come up
	I0719 18:31:38.013930   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:38.014402   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:38.014443   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:38.014339   33536 retry.go:31] will retry after 2.87643737s: waiting for machine to come up
	I0719 18:31:40.894223   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:40.894675   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:40.894697   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:40.894639   33536 retry.go:31] will retry after 3.244338926s: waiting for machine to come up
	I0719 18:31:44.140898   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:44.141390   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:44.141418   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:44.141344   33536 retry.go:31] will retry after 3.922811888s: waiting for machine to come up
	I0719 18:31:48.065990   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.066396   33150 main.go:141] libmachine: (ha-269108-m02) Found IP for machine: 192.168.39.87
	I0719 18:31:48.066425   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.066434   33150 main.go:141] libmachine: (ha-269108-m02) Reserving static IP address...
	I0719 18:31:48.066744   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find host DHCP lease matching {name: "ha-269108-m02", mac: "52:54:00:6e:a7:f0", ip: "192.168.39.87"} in network mk-ha-269108
	I0719 18:31:48.137287   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Getting to WaitForSSH function...
	I0719 18:31:48.137350   33150 main.go:141] libmachine: (ha-269108-m02) Reserved static IP address: 192.168.39.87
	I0719 18:31:48.137394   33150 main.go:141] libmachine: (ha-269108-m02) Waiting for SSH to be available...
	I0719 18:31:48.139982   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.140432   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.140459   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.140484   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using SSH client type: external
	I0719 18:31:48.140523   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa (-rw-------)
	I0719 18:31:48.140573   33150 main.go:141] libmachine: (ha-269108-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:31:48.140595   33150 main.go:141] libmachine: (ha-269108-m02) DBG | About to run SSH command:
	I0719 18:31:48.140627   33150 main.go:141] libmachine: (ha-269108-m02) DBG | exit 0
	I0719 18:31:48.267664   33150 main.go:141] libmachine: (ha-269108-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 18:31:48.267999   33150 main.go:141] libmachine: (ha-269108-m02) KVM machine creation complete!
	I0719 18:31:48.268268   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:48.268829   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:48.269049   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:48.269198   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:31:48.269213   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:31:48.270342   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:31:48.270358   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:31:48.270366   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:31:48.270374   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.272503   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.272832   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.272868   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.273041   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.273208   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.273343   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.273482   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.273668   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.273928   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.273943   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:31:48.382965   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:31:48.382985   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:31:48.382992   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.385791   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.386183   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.386226   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.386322   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.386486   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.386626   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.386788   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.386932   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.387086   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.387096   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:31:48.500066   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:31:48.500125   33150 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:31:48.500131   33150 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:31:48.500139   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.500359   33150 buildroot.go:166] provisioning hostname "ha-269108-m02"
	I0719 18:31:48.500374   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.500551   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.503114   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.503465   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.503491   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.503637   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.503797   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.503935   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.504110   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.504301   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.504457   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.504471   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108-m02 && echo "ha-269108-m02" | sudo tee /etc/hostname
	I0719 18:31:48.629059   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108-m02
	
	I0719 18:31:48.629092   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.631631   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.631952   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.631983   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.632142   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.632361   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.632621   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.632767   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.632932   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.633111   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.633127   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:31:48.752791   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:31:48.752816   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:31:48.752835   33150 buildroot.go:174] setting up certificates
	I0719 18:31:48.752847   33150 provision.go:84] configureAuth start
	I0719 18:31:48.752859   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.753109   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:48.755705   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.756161   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.756190   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.756359   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.758433   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.758833   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.758857   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.758987   33150 provision.go:143] copyHostCerts
	I0719 18:31:48.759019   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:31:48.759046   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:31:48.759054   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:31:48.759114   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:31:48.759197   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:31:48.759213   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:31:48.759220   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:31:48.759244   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:31:48.759297   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:31:48.759313   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:31:48.759319   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:31:48.759339   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:31:48.759401   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108-m02 san=[127.0.0.1 192.168.39.87 ha-269108-m02 localhost minikube]
	I0719 18:31:48.853483   33150 provision.go:177] copyRemoteCerts
	I0719 18:31:48.853535   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:31:48.853556   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.856270   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.856612   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.856635   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.856853   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.857021   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.857159   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.857312   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:48.941122   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:31:48.941189   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:31:48.964885   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:31:48.964951   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:31:48.986222   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:31:48.986286   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:31:49.007502   33150 provision.go:87] duration metric: took 254.642471ms to configureAuth
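The configureAuth/copyRemoteCerts step above generates a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.87, the hostname ha-269108-m02, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. An illustrative way to confirm the SANs on the guest (this check is not something the log runs):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'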
	I0719 18:31:49.007530   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:31:49.007696   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:49.007761   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.010388   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.010739   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.010760   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.010947   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.011104   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.011265   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.011369   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.011542   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:49.011699   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:49.011714   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:31:49.270448   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
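The SSH command just above writes the runtime-options drop-in and restarts CRI-O; the echoed output confirms the file content. A plain-shell sketch of the same step, using only the path and value shown in the log:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio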
	
	I0719 18:31:49.270472   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:31:49.270493   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetURL
	I0719 18:31:49.271746   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using libvirt version 6000000
	I0719 18:31:49.274029   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.274405   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.274422   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.274590   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:31:49.274608   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:31:49.274614   33150 client.go:171] duration metric: took 21.614780936s to LocalClient.Create
	I0719 18:31:49.274635   33150 start.go:167] duration metric: took 21.614836287s to libmachine.API.Create "ha-269108"
	I0719 18:31:49.274647   33150 start.go:293] postStartSetup for "ha-269108-m02" (driver="kvm2")
	I0719 18:31:49.274659   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:31:49.274687   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.274940   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:31:49.274964   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.277116   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.277417   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.277444   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.277595   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.277789   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.277941   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.278077   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.361625   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:31:49.365306   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:31:49.365354   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:31:49.365424   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:31:49.365494   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:31:49.365502   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:31:49.365602   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:31:49.374029   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:49.395428   33150 start.go:296] duration metric: took 120.768619ms for postStartSetup
	I0719 18:31:49.395482   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:49.396082   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:49.398511   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.398799   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.398829   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.399014   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:49.399184   33150 start.go:128] duration metric: took 21.758093311s to createHost
	I0719 18:31:49.399206   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.401491   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.401846   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.401874   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.402007   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.402221   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.402403   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.402557   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.402723   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:49.402873   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:49.402882   33150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:31:49.512177   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413909.485387400
	
	I0719 18:31:49.512200   33150 fix.go:216] guest clock: 1721413909.485387400
	I0719 18:31:49.512209   33150 fix.go:229] Guest: 2024-07-19 18:31:49.4853874 +0000 UTC Remote: 2024-07-19 18:31:49.399195075 +0000 UTC m=+78.012199986 (delta=86.192325ms)
	I0719 18:31:49.512228   33150 fix.go:200] guest clock delta is within tolerance: 86.192325ms
	I0719 18:31:49.512235   33150 start.go:83] releasing machines lock for "ha-269108-m02", held for 21.871243011s
	I0719 18:31:49.512256   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.512511   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:49.514775   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.515074   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.515098   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.517294   33150 out.go:177] * Found network options:
	I0719 18:31:49.518577   33150 out.go:177]   - NO_PROXY=192.168.39.163
	W0719 18:31:49.519614   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:31:49.519651   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520179   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520388   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520483   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:31:49.520518   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	W0719 18:31:49.520578   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:31:49.520650   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:31:49.520674   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.523256   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523637   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.523667   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523687   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523907   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.524120   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.524229   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.524265   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.524308   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.524388   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.524482   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.524563   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.524725   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.524857   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.755543   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:31:49.761508   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:31:49.761577   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:31:49.777307   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:31:49.777327   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:31:49.777390   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:31:49.794706   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:31:49.808829   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:31:49.808883   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:31:49.821430   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:31:49.833888   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:31:49.951895   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:31:50.105390   33150 docker.go:233] disabling docker service ...
	I0719 18:31:50.105449   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:31:50.118523   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:31:50.130348   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:31:50.241256   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:31:50.369648   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:31:50.382199   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:31:50.398418   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:31:50.398484   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.407703   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:31:50.407769   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.417097   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.426706   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.436105   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:31:50.445669   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.454852   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.469899   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.479120   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:31:50.487661   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:31:50.487708   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:31:50.499727   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:31:50.508159   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:50.615676   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
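The sed edits at 18:31:50 rewrite /etc/crio/crio.conf.d/02-crio.conf so that, after the restart above, CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager with conmon in the "pod" cgroup, and allows unprivileged low ports; br_netfilter is loaded and IPv4 forwarding enabled because the earlier sysctl probe failed. The resulting key/value lines should look roughly like this (only the values come from the log; the grouping into one drop-in file is how the commands address it):

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # plus, on the host, roughly:
    sudo modprobe br_netfilter && echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward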
	I0719 18:31:50.746884   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:31:50.746963   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:31:50.751800   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:31:50.751867   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:31:50.755191   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:31:50.794044   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:31:50.794118   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:31:50.819862   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:31:50.846764   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:31:50.848070   33150 out.go:177]   - env NO_PROXY=192.168.39.163
	I0719 18:31:50.849217   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:50.851695   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:50.852032   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:50.852117   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:50.852245   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:31:50.856088   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:50.867464   33150 mustload.go:65] Loading cluster: ha-269108
	I0719 18:31:50.867662   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:50.867935   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:50.867970   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:50.883089   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0719 18:31:50.883480   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:50.883925   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:50.883945   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:50.884249   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:50.884454   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:50.886105   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:50.886456   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:50.886496   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:50.900816   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0719 18:31:50.901175   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:50.901542   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:50.901561   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:50.901797   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:50.901959   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:50.902091   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.87
	I0719 18:31:50.902100   33150 certs.go:194] generating shared ca certs ...
	I0719 18:31:50.902112   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.902219   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:31:50.902265   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:31:50.902275   33150 certs.go:256] generating profile certs ...
	I0719 18:31:50.902336   33150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:31:50.902370   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8
	I0719 18:31:50.902384   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.254]
	I0719 18:31:50.955381   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 ...
	I0719 18:31:50.955406   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8: {Name:mk1ed57d246dd4a25373bc959e961930df56fedc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.955579   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8 ...
	I0719 18:31:50.955595   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8: {Name:mka4a4e0c034fd089f31c9a0ff0a6ffd5c63ebcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.955687   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:31:50.955814   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:31:50.955974   33150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:31:50.955989   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:31:50.956002   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:31:50.956014   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:31:50.956026   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:31:50.956038   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:31:50.956050   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:31:50.956061   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:31:50.956073   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:31:50.956119   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:31:50.956144   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:31:50.956154   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:31:50.956176   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:31:50.956197   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:31:50.956217   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:31:50.956250   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:50.956276   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:31:50.956288   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:31:50.956299   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:50.956328   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:50.959084   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:50.959458   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:50.959483   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:50.959678   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:50.959877   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:50.960013   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:50.960155   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:51.028284   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 18:31:51.032685   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 18:31:51.042270   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 18:31:51.045880   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 18:31:51.055032   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 18:31:51.058663   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 18:31:51.067350   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 18:31:51.070879   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 18:31:51.080056   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 18:31:51.083776   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 18:31:51.092276   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 18:31:51.096014   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 18:31:51.104870   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:31:51.128000   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:31:51.149479   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:31:51.172303   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:31:51.194643   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 18:31:51.217771   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 18:31:51.241808   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:31:51.263063   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:31:51.285347   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:31:51.305989   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:31:51.326648   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:31:51.347173   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 18:31:51.361828   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 18:31:51.376656   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 18:31:51.392563   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 18:31:51.408924   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 18:31:51.425071   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 18:31:51.440006   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 18:31:51.455007   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:31:51.460909   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:31:51.471462   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.475571   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.475631   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.481146   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:31:51.490915   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:31:51.500570   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.504339   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.504391   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.509772   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 18:31:51.520293   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:31:51.530596   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.534867   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.534917   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.540186   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:31:51.550405   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:31:51.554118   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:31:51.554175   33150 kubeadm.go:934] updating node {m02 192.168.39.87 8443 v1.30.3 crio true true} ...
	I0719 18:31:51.554259   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
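The unit snippet above is the kubeadm drop-in for the kubelet on the new node: ExecStart is cleared and re-set so the kubelet runs the v1.30.3 binary from /var/lib/minikube/binaries with --hostname-override=ha-269108-m02 and --node-ip=192.168.39.87. It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down (the 312-byte scp at 18:31:53). Illustrative commands to inspect the result on the guest, not run by the log:

    systemctl cat kubelet          # kubelet.service plus the 10-kubeadm.conf drop-in
    journalctl -u kubelet -b       # kubelet output once "systemctl start kubelet" has run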
	I0719 18:31:51.554283   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:31:51.554316   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:31:51.569669   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:31:51.569727   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
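The generated manifest above runs kube-vip as a static pod (it lands in /etc/kubernetes/manifests/kube-vip.yaml below): vip_leaderelection plus cp_enable make the elected control-plane node claim the VIP 192.168.39.254 on eth0, while lb_enable/lb_port spread API-server traffic on 8443 across the control planes. Hypothetical checks on a joined node, not part of the log:

    ip addr show eth0 | grep 192.168.39.254        # the VIP appears on the current leader
    curl -k https://192.168.39.254:8443/healthz    # API server reachable through the VIP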
	I0719 18:31:51.569768   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:51.578609   33150 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 18:31:51.578660   33150 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:51.587117   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 18:31:51.587143   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:31:51.587198   33150 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 18:31:51.587213   33150 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0719 18:31:51.587220   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:31:51.591409   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 18:31:51.591430   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 18:31:52.358680   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:31:52.358762   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:31:52.364051   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 18:31:52.364086   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 18:31:52.771265   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:31:52.785576   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:31:52.785678   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:31:52.790198   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 18:31:52.790229   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
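Because /var/lib/minikube/binaries/v1.30.3 is empty on the fresh node, kubectl, kubeadm and kubelet are downloaded into the local cache from dl.k8s.io (checksum-verified against the matching .sha256 file) and then copied over SSH, as the scp lines above show. A hand-rolled sketch for one binary, using the URL and destination from the log:

    curl -fLo kubelet https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -fLo kubelet.sha256 https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -
    # then place it at /var/lib/minikube/binaries/v1.30.3/kubelet on 192.168.39.87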
	I0719 18:31:53.142196   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 18:31:53.150942   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0719 18:31:53.165881   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:31:53.181165   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:31:53.196194   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:31:53.199907   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:53.211185   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:53.333315   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:31:53.350812   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:53.351157   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:53.351194   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:53.365717   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0719 18:31:53.366179   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:53.366658   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:53.366681   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:53.366990   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:53.367182   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:53.367333   33150 start.go:317] joinCluster: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:31:53.367456   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 18:31:53.367484   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:53.370868   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:53.371295   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:53.371322   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:53.371484   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:53.371660   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:53.371847   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:53.371977   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:53.517186   33150 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:53.517234   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iipf47.o9mc099cey98md9b --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m02 --control-plane --apiserver-advertise-address=192.168.39.87 --apiserver-bind-port=8443"
	I0719 18:32:15.605699   33150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iipf47.o9mc099cey98md9b --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m02 --control-plane --apiserver-advertise-address=192.168.39.87 --apiserver-bind-port=8443": (22.088445512s)
	I0719 18:32:15.605734   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 18:32:16.056967   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108-m02 minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=false
	I0719 18:32:16.185445   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-269108-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 18:32:16.300180   33150 start.go:319] duration metric: took 22.932844917s to joinCluster
	I0719 18:32:16.300257   33150 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:32:16.300538   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:32:16.301754   33150 out.go:177] * Verifying Kubernetes components...
	I0719 18:32:16.303049   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:32:16.567441   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:32:16.619902   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:32:16.620244   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 18:32:16.620383   33150 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.163:8443
	I0719 18:32:16.620647   33150 node_ready.go:35] waiting up to 6m0s for node "ha-269108-m02" to be "Ready" ...
	I0719 18:32:16.620730   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:16.620740   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:16.620751   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:16.620760   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:16.629506   33150 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 18:32:17.120918   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:17.120941   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:17.120950   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:17.120954   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:17.124699   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:17.621688   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:17.621711   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:17.621719   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:17.621723   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:17.626698   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:18.121183   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:18.121206   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:18.121214   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:18.121218   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:18.137958   33150 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 18:32:18.621462   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:18.621486   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:18.621495   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:18.621502   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:18.625196   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:18.625846   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:19.121195   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:19.121222   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:19.121234   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:19.121239   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:19.125364   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:19.621046   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:19.621065   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:19.621071   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:19.621076   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:19.623975   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:20.121666   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:20.121703   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:20.121713   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:20.121718   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:20.126700   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:20.620940   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:20.620967   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:20.620978   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:20.620983   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:20.624350   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:21.121399   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:21.121420   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:21.121428   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:21.121432   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:21.124874   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:21.125427   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:21.621480   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:21.621506   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:21.621518   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:21.621523   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:21.624698   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:22.121722   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:22.121743   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:22.121752   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:22.121756   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:22.125212   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:22.620856   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:22.620877   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:22.620884   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:22.620887   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:22.624701   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.121679   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:23.121700   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:23.121710   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:23.121716   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:23.124780   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.620976   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:23.620999   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:23.621008   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:23.621012   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:23.624839   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.625621   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:24.121520   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:24.121544   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:24.121551   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:24.121557   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:24.124875   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:24.621856   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:24.621877   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:24.621885   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:24.621889   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:24.625201   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:25.121308   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:25.121333   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:25.121345   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:25.121353   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:25.124441   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:25.621478   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:25.621503   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:25.621510   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:25.621514   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:25.624753   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:26.121726   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:26.121747   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:26.121754   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:26.121757   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:26.124654   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:26.125240   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:26.621299   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:26.621320   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:26.621328   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:26.621332   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:26.624263   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:27.121542   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:27.121565   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:27.121577   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:27.121581   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:27.125106   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:27.621862   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:27.621880   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:27.621888   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:27.621892   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:27.625552   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:28.121495   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:28.121517   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:28.121526   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:28.121531   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:28.125339   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:28.125816   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:28.621571   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:28.621595   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:28.621602   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:28.621607   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:28.625396   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:29.121215   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:29.121236   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:29.121244   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:29.121247   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:29.124255   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:29.621182   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:29.621209   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:29.621221   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:29.621226   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:29.624761   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:30.121745   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:30.121776   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:30.121792   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:30.121796   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:30.124764   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:30.621862   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:30.621882   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:30.621892   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:30.621898   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:30.628276   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:30.628857   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:31.121057   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:31.121079   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:31.121089   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:31.121094   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:31.124779   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:31.621747   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:31.621769   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:31.621779   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:31.621783   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:31.625218   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:32.121237   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:32.121259   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:32.121267   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:32.121271   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:32.124400   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:32.621474   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:32.621496   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:32.621507   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:32.621511   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:32.624876   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:33.121293   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:33.121326   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:33.121353   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:33.121361   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:33.124364   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:33.124991   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:33.620825   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:33.620845   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:33.620853   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:33.620855   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:33.623974   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:34.120884   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:34.120909   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:34.120917   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:34.120921   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:34.123974   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:34.620836   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:34.620861   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:34.620871   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:34.620877   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:34.624478   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.121110   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.121139   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.121151   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.121156   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.124402   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.125038   33150 node_ready.go:49] node "ha-269108-m02" has status "Ready":"True"
	I0719 18:32:35.125055   33150 node_ready.go:38] duration metric: took 18.504392019s for node "ha-269108-m02" to be "Ready" ...
	I0719 18:32:35.125065   33150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:32:35.125125   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:35.125133   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.125140   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.125144   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.129441   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:35.135047   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.135129   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-frb69
	I0719 18:32:35.135138   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.135146   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.135150   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.138040   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.138615   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.138630   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.138643   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.138647   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.140697   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.141184   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.141200   33150 pod_ready.go:81] duration metric: took 6.134036ms for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.141208   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.141250   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxh8h
	I0719 18:32:35.141257   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.141263   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.141267   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.143552   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.144142   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.144167   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.144174   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.144179   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.146257   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.146773   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.146789   33150 pod_ready.go:81] duration metric: took 5.574926ms for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.146800   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.146859   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108
	I0719 18:32:35.146868   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.146878   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.146889   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.148706   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.149127   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.149141   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.149148   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.149153   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.150865   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.151218   33150 pod_ready.go:92] pod "etcd-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.151231   33150 pod_ready.go:81] duration metric: took 4.424138ms for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.151241   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.151288   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m02
	I0719 18:32:35.151298   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.151307   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.151313   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.153285   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.153799   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.153812   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.153818   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.153823   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.155775   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.156211   33150 pod_ready.go:92] pod "etcd-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.156228   33150 pod_ready.go:81] duration metric: took 4.980654ms for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.156243   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.321660   33150 request.go:629] Waited for 165.362561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:32:35.321748   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:32:35.321760   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.321771   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.321779   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.325238   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.521120   33150 request.go:629] Waited for 195.275453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.521183   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.521188   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.521195   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.521200   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.524114   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.524681   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.524698   33150 pod_ready.go:81] duration metric: took 368.449533ms for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.524708   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.721824   33150 request.go:629] Waited for 197.054878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:32:35.721881   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:32:35.721886   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.721903   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.721910   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.725507   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.921084   33150 request.go:629] Waited for 194.711523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.921172   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.921185   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.921195   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.921211   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.923984   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.924837   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.924855   33150 pod_ready.go:81] duration metric: took 400.138736ms for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.924864   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.121883   33150 request.go:629] Waited for 196.949216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:32:36.121939   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:32:36.121946   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.121955   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.121960   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.125180   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.321250   33150 request.go:629] Waited for 195.30303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:36.321325   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:36.321331   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.321339   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.321344   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.324367   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.324924   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:36.324941   33150 pod_ready.go:81] duration metric: took 400.071173ms for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.324954   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.521100   33150 request.go:629] Waited for 196.078894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:32:36.521164   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:32:36.521170   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.521181   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.521189   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.524882   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.721824   33150 request.go:629] Waited for 196.361603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:36.721882   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:36.721897   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.721905   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.721911   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.725292   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.725975   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:36.725997   33150 pod_ready.go:81] duration metric: took 401.035221ms for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.726010   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.922016   33150 request.go:629] Waited for 195.942105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:32:36.922073   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:32:36.922078   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.922085   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.922088   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.925675   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.121654   33150 request.go:629] Waited for 195.361094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.121714   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.121721   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.121730   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.121734   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.125342   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.125923   33150 pod_ready.go:92] pod "kube-proxy-pkjns" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.125942   33150 pod_ready.go:81] duration metric: took 399.924914ms for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.125954   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.321089   33150 request.go:629] Waited for 195.06817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:32:37.321153   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:32:37.321159   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.321166   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.321169   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.325503   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:37.521495   33150 request.go:629] Waited for 195.35182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:37.521552   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:37.521557   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.521565   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.521569   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.525645   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:37.526356   33150 pod_ready.go:92] pod "kube-proxy-vr7bz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.526371   33150 pod_ready.go:81] duration metric: took 400.410651ms for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.526389   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.721967   33150 request.go:629] Waited for 195.499184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:32:37.722017   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:32:37.722022   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.722029   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.722034   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.725392   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.921256   33150 request.go:629] Waited for 195.292837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.921306   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.921311   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.921320   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.921325   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.924807   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.925240   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.925260   33150 pod_ready.go:81] duration metric: took 398.863069ms for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.925272   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:38.121456   33150 request.go:629] Waited for 196.116624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:32:38.121515   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:32:38.121520   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.121531   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.121538   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.124785   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.321677   33150 request.go:629] Waited for 196.388232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:38.321743   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:38.321750   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.321759   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.321763   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.325635   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.326135   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:38.326152   33150 pod_ready.go:81] duration metric: took 400.872775ms for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:38.326162   33150 pod_ready.go:38] duration metric: took 3.201086857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:32:38.326176   33150 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:32:38.326221   33150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:32:38.341378   33150 api_server.go:72] duration metric: took 22.041085997s to wait for apiserver process to appear ...
	I0719 18:32:38.341404   33150 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:32:38.341423   33150 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I0719 18:32:38.345678   33150 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I0719 18:32:38.345745   33150 round_trippers.go:463] GET https://192.168.39.163:8443/version
	I0719 18:32:38.345752   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.345764   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.345774   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.346614   33150 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 18:32:38.346758   33150 api_server.go:141] control plane version: v1.30.3
	I0719 18:32:38.346781   33150 api_server.go:131] duration metric: took 5.370366ms to wait for apiserver health ...
	I0719 18:32:38.346789   33150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:32:38.521424   33150 request.go:629] Waited for 174.561181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.521480   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.521486   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.521493   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.521497   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.527885   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:38.532086   33150 system_pods.go:59] 17 kube-system pods found
	I0719 18:32:38.532108   33150 system_pods.go:61] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:32:38.532114   33150 system_pods.go:61] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:32:38.532117   33150 system_pods.go:61] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:32:38.532121   33150 system_pods.go:61] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:32:38.532124   33150 system_pods.go:61] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:32:38.532127   33150 system_pods.go:61] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:32:38.532130   33150 system_pods.go:61] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:32:38.532134   33150 system_pods.go:61] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:32:38.532137   33150 system_pods.go:61] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:32:38.532140   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:32:38.532143   33150 system_pods.go:61] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:32:38.532146   33150 system_pods.go:61] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:32:38.532149   33150 system_pods.go:61] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:32:38.532152   33150 system_pods.go:61] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:32:38.532155   33150 system_pods.go:61] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:32:38.532157   33150 system_pods.go:61] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:32:38.532160   33150 system_pods.go:61] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:32:38.532165   33150 system_pods.go:74] duration metric: took 185.367616ms to wait for pod list to return data ...
	I0719 18:32:38.532174   33150 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:32:38.721593   33150 request.go:629] Waited for 189.343841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:32:38.721651   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:32:38.721657   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.721666   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.721672   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.724957   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.725179   33150 default_sa.go:45] found service account: "default"
	I0719 18:32:38.725204   33150 default_sa.go:55] duration metric: took 193.024673ms for default service account to be created ...
	I0719 18:32:38.725212   33150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:32:38.921656   33150 request.go:629] Waited for 196.380607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.921728   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.921737   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.921744   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.921750   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.927970   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:38.931858   33150 system_pods.go:86] 17 kube-system pods found
	I0719 18:32:38.931883   33150 system_pods.go:89] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:32:38.931889   33150 system_pods.go:89] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:32:38.931893   33150 system_pods.go:89] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:32:38.931898   33150 system_pods.go:89] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:32:38.931902   33150 system_pods.go:89] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:32:38.931906   33150 system_pods.go:89] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:32:38.931910   33150 system_pods.go:89] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:32:38.931914   33150 system_pods.go:89] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:32:38.931919   33150 system_pods.go:89] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:32:38.931925   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:32:38.931929   33150 system_pods.go:89] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:32:38.931935   33150 system_pods.go:89] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:32:38.931939   33150 system_pods.go:89] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:32:38.931945   33150 system_pods.go:89] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:32:38.931949   33150 system_pods.go:89] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:32:38.931955   33150 system_pods.go:89] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:32:38.931959   33150 system_pods.go:89] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:32:38.931968   33150 system_pods.go:126] duration metric: took 206.748824ms to wait for k8s-apps to be running ...
	I0719 18:32:38.931977   33150 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:32:38.932020   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:32:38.946614   33150 system_svc.go:56] duration metric: took 14.622933ms WaitForService to wait for kubelet
	I0719 18:32:38.946649   33150 kubeadm.go:582] duration metric: took 22.646358973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:32:38.946675   33150 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:32:39.122087   33150 request.go:629] Waited for 175.330722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes
	I0719 18:32:39.122138   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes
	I0719 18:32:39.122144   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:39.122154   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:39.122164   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:39.125738   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:39.126366   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:32:39.126392   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:32:39.126402   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:32:39.126406   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:32:39.126410   33150 node_conditions.go:105] duration metric: took 179.729295ms to run NodePressure ...
	I0719 18:32:39.126423   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:32:39.126447   33150 start.go:255] writing updated cluster config ...
	I0719 18:32:39.128558   33150 out.go:177] 
	I0719 18:32:39.130063   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:32:39.130158   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:32:39.131810   33150 out.go:177] * Starting "ha-269108-m03" control-plane node in "ha-269108" cluster
	I0719 18:32:39.132991   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:32:39.133009   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:32:39.133099   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:32:39.133112   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:32:39.133207   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:32:39.133390   33150 start.go:360] acquireMachinesLock for ha-269108-m03: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:32:39.133438   33150 start.go:364] duration metric: took 27.66µs to acquireMachinesLock for "ha-269108-m03"
	I0719 18:32:39.133462   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:32:39.133580   33150 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 18:32:39.135010   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:32:39.135081   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:32:39.135110   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:32:39.149933   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0719 18:32:39.150332   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:32:39.150921   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:32:39.150940   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:32:39.151262   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:32:39.151462   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:32:39.151612   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:32:39.151764   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:32:39.151791   33150 client.go:168] LocalClient.Create starting
	I0719 18:32:39.151817   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:32:39.151861   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:32:39.151876   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:32:39.151921   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:32:39.151938   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:32:39.151949   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:32:39.151968   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:32:39.151976   33150 main.go:141] libmachine: (ha-269108-m03) Calling .PreCreateCheck
	I0719 18:32:39.152135   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:32:39.152519   33150 main.go:141] libmachine: Creating machine...
	I0719 18:32:39.152532   33150 main.go:141] libmachine: (ha-269108-m03) Calling .Create
	I0719 18:32:39.152733   33150 main.go:141] libmachine: (ha-269108-m03) Creating KVM machine...
	I0719 18:32:39.153913   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found existing default KVM network
	I0719 18:32:39.154118   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found existing private KVM network mk-ha-269108
	I0719 18:32:39.154268   33150 main.go:141] libmachine: (ha-269108-m03) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 ...
	I0719 18:32:39.154292   33150 main.go:141] libmachine: (ha-269108-m03) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:32:39.154364   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.154269   33893 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:32:39.154434   33150 main.go:141] libmachine: (ha-269108-m03) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:32:39.373669   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.373525   33893 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa...
	I0719 18:32:39.495402   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.495267   33893 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/ha-269108-m03.rawdisk...
	I0719 18:32:39.495432   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Writing magic tar header
	I0719 18:32:39.495446   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Writing SSH key tar header
	I0719 18:32:39.495459   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.495387   33893 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 ...
	I0719 18:32:39.495475   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03
	I0719 18:32:39.495573   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 (perms=drwx------)
	I0719 18:32:39.495595   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:32:39.495606   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:32:39.495636   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:32:39.495655   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:32:39.495674   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:32:39.495691   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:32:39.495702   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:32:39.495713   33150 main.go:141] libmachine: (ha-269108-m03) Creating domain...
	I0719 18:32:39.495731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:32:39.495744   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:32:39.495758   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:32:39.495770   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home
	I0719 18:32:39.495783   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Skipping /home - not owner
	I0719 18:32:39.496629   33150 main.go:141] libmachine: (ha-269108-m03) define libvirt domain using xml: 
	I0719 18:32:39.496652   33150 main.go:141] libmachine: (ha-269108-m03) <domain type='kvm'>
	I0719 18:32:39.496682   33150 main.go:141] libmachine: (ha-269108-m03)   <name>ha-269108-m03</name>
	I0719 18:32:39.496705   33150 main.go:141] libmachine: (ha-269108-m03)   <memory unit='MiB'>2200</memory>
	I0719 18:32:39.496715   33150 main.go:141] libmachine: (ha-269108-m03)   <vcpu>2</vcpu>
	I0719 18:32:39.496727   33150 main.go:141] libmachine: (ha-269108-m03)   <features>
	I0719 18:32:39.496739   33150 main.go:141] libmachine: (ha-269108-m03)     <acpi/>
	I0719 18:32:39.496746   33150 main.go:141] libmachine: (ha-269108-m03)     <apic/>
	I0719 18:32:39.496755   33150 main.go:141] libmachine: (ha-269108-m03)     <pae/>
	I0719 18:32:39.496763   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.496768   33150 main.go:141] libmachine: (ha-269108-m03)   </features>
	I0719 18:32:39.496775   33150 main.go:141] libmachine: (ha-269108-m03)   <cpu mode='host-passthrough'>
	I0719 18:32:39.496781   33150 main.go:141] libmachine: (ha-269108-m03)   
	I0719 18:32:39.496790   33150 main.go:141] libmachine: (ha-269108-m03)   </cpu>
	I0719 18:32:39.496798   33150 main.go:141] libmachine: (ha-269108-m03)   <os>
	I0719 18:32:39.496812   33150 main.go:141] libmachine: (ha-269108-m03)     <type>hvm</type>
	I0719 18:32:39.496824   33150 main.go:141] libmachine: (ha-269108-m03)     <boot dev='cdrom'/>
	I0719 18:32:39.496834   33150 main.go:141] libmachine: (ha-269108-m03)     <boot dev='hd'/>
	I0719 18:32:39.496845   33150 main.go:141] libmachine: (ha-269108-m03)     <bootmenu enable='no'/>
	I0719 18:32:39.496855   33150 main.go:141] libmachine: (ha-269108-m03)   </os>
	I0719 18:32:39.496862   33150 main.go:141] libmachine: (ha-269108-m03)   <devices>
	I0719 18:32:39.496875   33150 main.go:141] libmachine: (ha-269108-m03)     <disk type='file' device='cdrom'>
	I0719 18:32:39.496900   33150 main.go:141] libmachine: (ha-269108-m03)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/boot2docker.iso'/>
	I0719 18:32:39.496918   33150 main.go:141] libmachine: (ha-269108-m03)       <target dev='hdc' bus='scsi'/>
	I0719 18:32:39.496930   33150 main.go:141] libmachine: (ha-269108-m03)       <readonly/>
	I0719 18:32:39.496939   33150 main.go:141] libmachine: (ha-269108-m03)     </disk>
	I0719 18:32:39.496948   33150 main.go:141] libmachine: (ha-269108-m03)     <disk type='file' device='disk'>
	I0719 18:32:39.496956   33150 main.go:141] libmachine: (ha-269108-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:32:39.496964   33150 main.go:141] libmachine: (ha-269108-m03)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/ha-269108-m03.rawdisk'/>
	I0719 18:32:39.496971   33150 main.go:141] libmachine: (ha-269108-m03)       <target dev='hda' bus='virtio'/>
	I0719 18:32:39.496977   33150 main.go:141] libmachine: (ha-269108-m03)     </disk>
	I0719 18:32:39.496984   33150 main.go:141] libmachine: (ha-269108-m03)     <interface type='network'>
	I0719 18:32:39.496991   33150 main.go:141] libmachine: (ha-269108-m03)       <source network='mk-ha-269108'/>
	I0719 18:32:39.497001   33150 main.go:141] libmachine: (ha-269108-m03)       <model type='virtio'/>
	I0719 18:32:39.497025   33150 main.go:141] libmachine: (ha-269108-m03)     </interface>
	I0719 18:32:39.497042   33150 main.go:141] libmachine: (ha-269108-m03)     <interface type='network'>
	I0719 18:32:39.497058   33150 main.go:141] libmachine: (ha-269108-m03)       <source network='default'/>
	I0719 18:32:39.497069   33150 main.go:141] libmachine: (ha-269108-m03)       <model type='virtio'/>
	I0719 18:32:39.497083   33150 main.go:141] libmachine: (ha-269108-m03)     </interface>
	I0719 18:32:39.497093   33150 main.go:141] libmachine: (ha-269108-m03)     <serial type='pty'>
	I0719 18:32:39.497105   33150 main.go:141] libmachine: (ha-269108-m03)       <target port='0'/>
	I0719 18:32:39.497116   33150 main.go:141] libmachine: (ha-269108-m03)     </serial>
	I0719 18:32:39.497127   33150 main.go:141] libmachine: (ha-269108-m03)     <console type='pty'>
	I0719 18:32:39.497135   33150 main.go:141] libmachine: (ha-269108-m03)       <target type='serial' port='0'/>
	I0719 18:32:39.497147   33150 main.go:141] libmachine: (ha-269108-m03)     </console>
	I0719 18:32:39.497157   33150 main.go:141] libmachine: (ha-269108-m03)     <rng model='virtio'>
	I0719 18:32:39.497171   33150 main.go:141] libmachine: (ha-269108-m03)       <backend model='random'>/dev/random</backend>
	I0719 18:32:39.497180   33150 main.go:141] libmachine: (ha-269108-m03)     </rng>
	I0719 18:32:39.497193   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.497200   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.497205   33150 main.go:141] libmachine: (ha-269108-m03)   </devices>
	I0719 18:32:39.497214   33150 main.go:141] libmachine: (ha-269108-m03) </domain>
	I0719 18:32:39.497227   33150 main.go:141] libmachine: (ha-269108-m03) 
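	Note on the domain definition above: the kvm2 driver builds this XML and hands it to libvirt. The following is a minimal, illustrative Go sketch of rendering such an XML document from a template; the struct fields, paths, and template text are assumptions for illustration and are not minikube's actual driver code.

	package main

	import (
		"os"
		"text/template"
	)

	// domainConfig holds the handful of values that vary between nodes.
	// Field names here are illustrative, not minikube's real types.
	type domainConfig struct {
		Name     string
		MemoryMB int
		VCPUs    int
		ISOPath  string
		DiskPath string
		Network  string
	}

	// domainXML is a trimmed-down version of the XML printed in the log above.
	const domainXML = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		tmpl := template.Must(template.New("domain").Parse(domainXML))
		cfg := domainConfig{
			Name:     "ha-269108-m03",
			MemoryMB: 2200,
			VCPUs:    2,
			ISOPath:  "/path/to/boot2docker.iso",
			DiskPath: "/path/to/ha-269108-m03.rawdisk",
			Network:  "mk-ha-269108",
		}
		// Render to stdout; the real driver would pass the result to libvirt's define-domain call.
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}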
	I0719 18:32:39.503657   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:5b:75:d6 in network default
	I0719 18:32:39.504300   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring networks are active...
	I0719 18:32:39.504323   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:39.505005   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring network default is active
	I0719 18:32:39.505341   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring network mk-ha-269108 is active
	I0719 18:32:39.505684   33150 main.go:141] libmachine: (ha-269108-m03) Getting domain xml...
	I0719 18:32:39.506378   33150 main.go:141] libmachine: (ha-269108-m03) Creating domain...
	I0719 18:32:40.725450   33150 main.go:141] libmachine: (ha-269108-m03) Waiting to get IP...
	I0719 18:32:40.726366   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:40.726797   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:40.726840   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:40.726783   33893 retry.go:31] will retry after 210.192396ms: waiting for machine to come up
	I0719 18:32:40.938058   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:40.938443   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:40.938469   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:40.938388   33893 retry.go:31] will retry after 374.294477ms: waiting for machine to come up
	I0719 18:32:41.313929   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:41.314394   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:41.314423   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:41.314330   33893 retry.go:31] will retry after 397.700343ms: waiting for machine to come up
	I0719 18:32:41.713844   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:41.714224   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:41.714247   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:41.714184   33893 retry.go:31] will retry after 460.580971ms: waiting for machine to come up
	I0719 18:32:42.176610   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:42.177142   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:42.177171   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:42.177087   33893 retry.go:31] will retry after 646.320778ms: waiting for machine to come up
	I0719 18:32:42.824988   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:42.825465   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:42.825491   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:42.825413   33893 retry.go:31] will retry after 892.756613ms: waiting for machine to come up
	I0719 18:32:43.720467   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:43.720914   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:43.720940   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:43.720863   33893 retry.go:31] will retry after 1.152338271s: waiting for machine to come up
	I0719 18:32:44.875127   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:44.875578   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:44.875598   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:44.875543   33893 retry.go:31] will retry after 1.065599322s: waiting for machine to come up
	I0719 18:32:45.942620   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:45.942976   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:45.943004   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:45.942938   33893 retry.go:31] will retry after 1.77360277s: waiting for machine to come up
	I0719 18:32:47.718554   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:47.719068   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:47.719105   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:47.719009   33893 retry.go:31] will retry after 2.016398592s: waiting for machine to come up
	I0719 18:32:49.737314   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:49.737793   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:49.737824   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:49.737756   33893 retry.go:31] will retry after 1.850110184s: waiting for machine to come up
	I0719 18:32:51.589604   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:51.589975   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:51.590004   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:51.589928   33893 retry.go:31] will retry after 2.429018042s: waiting for machine to come up
	I0719 18:32:54.020850   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:54.021367   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:54.021397   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:54.021304   33893 retry.go:31] will retry after 3.578950672s: waiting for machine to come up
	I0719 18:32:57.601544   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:57.601964   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:57.601991   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:57.601920   33893 retry.go:31] will retry after 5.362333357s: waiting for machine to come up
	I0719 18:33:02.967092   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:02.967642   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has current primary IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:02.967669   33150 main.go:141] libmachine: (ha-269108-m03) Found IP for machine: 192.168.39.111
	I0719 18:33:02.967684   33150 main.go:141] libmachine: (ha-269108-m03) Reserving static IP address...
	I0719 18:33:02.968141   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find host DHCP lease matching {name: "ha-269108-m03", mac: "52:54:00:d3:60:8c", ip: "192.168.39.111"} in network mk-ha-269108
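	The "will retry after ..." lines above show the driver polling libvirt for a DHCP lease, sleeping a little longer after each miss. Below is a small, self-contained Go sketch of that retry-with-growing-delay pattern; lookupIP is a placeholder, not minikube's real helper, and the delay growth factor is an assumption.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for "ask libvirt for the domain's DHCP lease".
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP retries with a growing, slightly jittered delay, much like the
	// "will retry after ..." lines in the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookupIP(mac)
			if err == nil && ip != "" {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay each attempt
		}
		return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:d3:60:8c", 5*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("got IP:", ip)
		}
	}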
	I0719 18:33:03.039511   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Getting to WaitForSSH function...
	I0719 18:33:03.039540   33150 main.go:141] libmachine: (ha-269108-m03) Reserved static IP address: 192.168.39.111
	I0719 18:33:03.039553   33150 main.go:141] libmachine: (ha-269108-m03) Waiting for SSH to be available...
	I0719 18:33:03.042009   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.042490   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.042520   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.042653   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using SSH client type: external
	I0719 18:33:03.042665   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa (-rw-------)
	I0719 18:33:03.042724   33150 main.go:141] libmachine: (ha-269108-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:33:03.042742   33150 main.go:141] libmachine: (ha-269108-m03) DBG | About to run SSH command:
	I0719 18:33:03.042757   33150 main.go:141] libmachine: (ha-269108-m03) DBG | exit 0
	I0719 18:33:03.175598   33150 main.go:141] libmachine: (ha-269108-m03) DBG | SSH cmd err, output: <nil>: 
	I0719 18:33:03.175866   33150 main.go:141] libmachine: (ha-269108-m03) KVM machine creation complete!
	I0719 18:33:03.176232   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:33:03.176754   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:03.176971   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:03.177153   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:33:03.177166   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:33:03.178434   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:33:03.178447   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:33:03.178452   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:33:03.178462   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.180852   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.181231   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.181254   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.181417   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.181594   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.181775   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.181920   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.182086   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.182287   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.182299   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:33:03.297763   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
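	Before provisioning, the driver simply waits until the guest answers on port 22 and an "exit 0" run over SSH succeeds. A stdlib-only sketch of the reachability half of that check (TCP dial only, no key handling; addresses and timeouts are illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSHPort polls tcp/22 until the guest accepts connections or the
	// timeout expires. The real code goes further and runs "exit 0" over SSH.
	func waitForSSHPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh port %s not reachable within %v", addr, timeout)
	}

	func main() {
		if err := waitForSSHPort("192.168.39.111:22", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("SSH port is open")
	}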
	I0719 18:33:03.297788   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:33:03.297799   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.300652   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.301069   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.301101   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.301293   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.301511   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.301648   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.301795   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.301966   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.302173   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.302189   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:33:03.412178   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:33:03.412241   33150 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:33:03.412248   33150 main.go:141] libmachine: Provisioning with buildroot...
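	Provisioner detection boils down to reading /etc/os-release over SSH and matching its KEY=value fields (here ID=buildroot). A small sketch of parsing that format, using the content shown in the log as sample input:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release style "KEY=value" lines into a map,
	// stripping surrounding quotes where present.
	func parseOSRelease(content string) map[string]string {
		out := make(map[string]string)
		sc := bufio.NewScanner(strings.NewReader(content))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			key, val, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[key] = strings.Trim(val, `"`)
		}
		return out
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(sample)
		if info["ID"] == "buildroot" {
			fmt.Println("found compatible host:", info["PRETTY_NAME"])
		}
	}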
	I0719 18:33:03.412256   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.412542   33150 buildroot.go:166] provisioning hostname "ha-269108-m03"
	I0719 18:33:03.412566   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.412728   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.415456   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.415729   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.415757   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.415891   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.416079   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.416234   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.416379   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.416555   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.416741   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.416759   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108-m03 && echo "ha-269108-m03" | sudo tee /etc/hostname
	I0719 18:33:03.542883   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108-m03
	
	I0719 18:33:03.542916   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.545487   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.545794   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.545820   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.545931   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.546125   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.546279   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.546403   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.546569   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.546758   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.546775   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:33:03.668542   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:33:03.668568   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:33:03.668584   33150 buildroot.go:174] setting up certificates
	I0719 18:33:03.668594   33150 provision.go:84] configureAuth start
	I0719 18:33:03.668603   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.668874   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:03.671519   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.671869   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.671892   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.672011   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.674260   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.674643   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.674668   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.674791   33150 provision.go:143] copyHostCerts
	I0719 18:33:03.674824   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:33:03.674858   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:33:03.674869   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:33:03.674934   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:33:03.675009   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:33:03.675034   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:33:03.675040   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:33:03.675064   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:33:03.675124   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:33:03.675139   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:33:03.675145   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:33:03.675165   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:33:03.675228   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108-m03 san=[127.0.0.1 192.168.39.111 ha-269108-m03 localhost minikube]
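	The server cert generated above is signed by the local minikube CA and carries the node's IP and hostnames as SANs. A self-contained sketch of building such a certificate with crypto/x509 (self-signed here for brevity; the real code signs with the CA cert/key, and the validity period shown just mirrors the 26280h CertExpiration from the config):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-269108-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log: loopback, the node IP, hostnames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.111")},
			DNSNames:    []string{"ha-269108-m03", "localhost", "minikube"},
		}
		// Self-signed for the sketch; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}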
	I0719 18:33:03.794983   33150 provision.go:177] copyRemoteCerts
	I0719 18:33:03.795034   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:33:03.795056   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.797426   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.797765   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.797792   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.798012   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.798240   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.798517   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.798687   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:03.885886   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:33:03.885965   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:33:03.907808   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:33:03.907891   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:33:03.929652   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:33:03.929717   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 18:33:03.951378   33150 provision.go:87] duration metric: took 282.771798ms to configureAuth
	I0719 18:33:03.951404   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:33:03.951637   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:03.951725   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.954343   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.954703   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.954731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.954897   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.955082   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.955240   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.955336   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.955503   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.955700   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.955723   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:33:04.226707   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:33:04.226729   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:33:04.226737   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetURL
	I0719 18:33:04.227969   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using libvirt version 6000000
	I0719 18:33:04.230350   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.230731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.230759   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.230938   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:33:04.230953   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:33:04.230959   33150 client.go:171] duration metric: took 25.079162097s to LocalClient.Create
	I0719 18:33:04.230978   33150 start.go:167] duration metric: took 25.079214807s to libmachine.API.Create "ha-269108"
	I0719 18:33:04.230987   33150 start.go:293] postStartSetup for "ha-269108-m03" (driver="kvm2")
	I0719 18:33:04.230996   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:33:04.231010   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.231213   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:33:04.231239   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.233590   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.233972   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.234001   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.234145   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.234310   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.234474   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.234618   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.323073   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:33:04.327041   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:33:04.327064   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:33:04.327123   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:33:04.327196   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:33:04.327205   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:33:04.327279   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:33:04.335591   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:33:04.356906   33150 start.go:296] duration metric: took 125.907474ms for postStartSetup
	I0719 18:33:04.356959   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:33:04.357532   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:04.360358   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.360899   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.360925   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.361203   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:33:04.361379   33150 start.go:128] duration metric: took 25.227789706s to createHost
	I0719 18:33:04.361401   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.363499   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.363825   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.363859   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.364009   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.364178   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.364335   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.364528   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.364688   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:04.364846   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:04.364855   33150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:33:04.476394   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413984.456165797
	
	I0719 18:33:04.476418   33150 fix.go:216] guest clock: 1721413984.456165797
	I0719 18:33:04.476428   33150 fix.go:229] Guest: 2024-07-19 18:33:04.456165797 +0000 UTC Remote: 2024-07-19 18:33:04.361391314 +0000 UTC m=+152.974396235 (delta=94.774483ms)
	I0719 18:33:04.476449   33150 fix.go:200] guest clock delta is within tolerance: 94.774483ms
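	The clock check above compares the guest's "date +%s.%N" output against the host's notion of "now" and only considers resyncing when the delta exceeds a tolerance. A tiny sketch of that comparison, using the delta from the log; the 2-second threshold is an assumption for illustration, not necessarily minikube's value:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockDeltaWithinTolerance reports the guest-host clock delta and whether
	// its magnitude is within the given tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		// Guest timestamp taken from the log line above; host offset reproduces the ~94.77ms delta.
		guest := time.Unix(1721413984, 456165797)
		host := guest.Add(-94774483 * time.Nanosecond)
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}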
	I0719 18:33:04.476455   33150 start.go:83] releasing machines lock for "ha-269108-m03", held for 25.343006363s
	I0719 18:33:04.476480   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.476742   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:04.479410   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.479741   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.479765   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.482154   33150 out.go:177] * Found network options:
	I0719 18:33:04.483606   33150 out.go:177]   - NO_PROXY=192.168.39.163,192.168.39.87
	W0719 18:33:04.484793   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 18:33:04.484814   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:33:04.484831   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485362   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485576   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485679   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:33:04.485716   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	W0719 18:33:04.485802   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 18:33:04.485823   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:33:04.485882   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:33:04.485903   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.488175   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488477   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.488504   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488611   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.488637   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488783   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.488965   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.489013   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.489040   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.489122   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.489181   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.489310   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.489443   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.489576   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.722278   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:33:04.728160   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:33:04.728246   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:33:04.745040   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:33:04.745066   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:33:04.745142   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:33:04.760830   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:33:04.774261   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:33:04.774322   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:33:04.787354   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:33:04.800627   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:33:04.905302   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:33:05.049685   33150 docker.go:233] disabling docker service ...
	I0719 18:33:05.049741   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:33:05.064518   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:33:05.078363   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:33:05.224145   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:33:05.351293   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:33:05.364081   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:33:05.381230   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:33:05.381280   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.390574   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:33:05.390640   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.400221   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.410885   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.421484   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:33:05.432646   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.442273   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.459170   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.469148   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:33:05.479572   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:33:05.479632   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:33:05.491280   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:33:05.500464   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:05.626289   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
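The sed edits and service restarts above are the whole of the CRI-O preparation on this node. A condensed sketch of the same steps, using only the commands already shown in the log (paths and values as logged; run as root on the node):

    # Point CRI-O at the expected pause image and the cgroupfs cgroup manager,
    # then reload units and restart the runtime (same commands as logged above).
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio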
	I0719 18:33:05.762237   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:33:05.762315   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:33:05.766673   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:33:05.766727   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:33:05.770091   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:33:05.807287   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:33:05.807371   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:33:05.833345   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:33:05.867549   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:33:05.868994   33150 out.go:177]   - env NO_PROXY=192.168.39.163
	I0719 18:33:05.870387   33150 out.go:177]   - env NO_PROXY=192.168.39.163,192.168.39.87
	I0719 18:33:05.871489   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:05.873823   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:05.874207   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:05.874236   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:05.874426   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:33:05.878106   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
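The grep-then-rewrite above is an idempotent /etc/hosts update: only when the host.minikube.internal entry is missing is the file rewritten with the entry appended. A minimal standalone sketch of the same pattern (IP and hostname taken from the log):

    # Add host.minikube.internal only if it is not already present (pattern as logged above).
    if ! grep -q $'\thost.minikube.internal$' /etc/hosts; then
      { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    fi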
	I0719 18:33:05.889796   33150 mustload.go:65] Loading cluster: ha-269108
	I0719 18:33:05.890007   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:05.890259   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:05.890295   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:05.905200   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42299
	I0719 18:33:05.905604   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:05.906025   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:05.906050   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:05.906309   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:05.906525   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:33:05.907985   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:33:05.908303   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:05.908341   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:05.922260   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0719 18:33:05.922599   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:05.923041   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:05.923059   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:05.923332   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:05.923510   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:33:05.923633   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.111
	I0719 18:33:05.923644   33150 certs.go:194] generating shared ca certs ...
	I0719 18:33:05.923662   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:05.923794   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:33:05.923864   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:33:05.923878   33150 certs.go:256] generating profile certs ...
	I0719 18:33:05.923963   33150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:33:05.923993   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e
	I0719 18:33:05.924014   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.111 192.168.39.254]
	I0719 18:33:06.025750   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e ...
	I0719 18:33:06.025786   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e: {Name:mk51ba19514db8b85ef9b3173ef1e66f87760fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:06.025973   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e ...
	I0719 18:33:06.025990   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e: {Name:mkc4cce785486f459321a7d862eda10c6eda9616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:06.026088   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:33:06.026237   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
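The apiserver serving certificate generated above has to cover every endpoint a client might dial, which is why its SAN list includes the in-cluster service IP (10.96.0.1), localhost, the three node IPs and the kube-vip address 192.168.39.254. minikube does this in Go; a rough openssl equivalent, shown only as an illustration and not minikube's implementation, would be:

    # Hypothetical openssl equivalent of the profile apiserver cert above,
    # signed by the cluster CA with the same IP SANs.
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.163,IP:192.168.39.87,IP:192.168.39.111,IP:192.168.39.254')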
	I0719 18:33:06.026403   33150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:33:06.026419   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:33:06.026437   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:33:06.026453   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:33:06.026471   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:33:06.026488   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:33:06.026506   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:33:06.026523   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:33:06.026547   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:33:06.026622   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:33:06.026665   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:33:06.026678   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:33:06.026711   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:33:06.026741   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:33:06.026770   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:33:06.026828   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:33:06.026864   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.026883   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.026900   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.026938   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:33:06.029911   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:06.030330   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:33:06.030350   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:06.030598   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:33:06.030793   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:33:06.030952   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:33:06.031142   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:33:06.104148   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 18:33:06.109724   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 18:33:06.121831   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 18:33:06.126025   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 18:33:06.138562   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 18:33:06.143389   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 18:33:06.154473   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 18:33:06.159023   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 18:33:06.171644   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 18:33:06.175264   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 18:33:06.184464   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 18:33:06.188359   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 18:33:06.198566   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:33:06.221966   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:33:06.243388   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:33:06.265285   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:33:06.289054   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 18:33:06.312452   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:33:06.336995   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:33:06.361263   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:33:06.384792   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:33:06.407244   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:33:06.430957   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:33:06.455042   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 18:33:06.471039   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 18:33:06.487363   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 18:33:06.508221   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 18:33:06.522920   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 18:33:06.538405   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 18:33:06.553152   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 18:33:06.569197   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:33:06.574663   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:33:06.584356   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.588528   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.588575   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.594073   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:33:06.603847   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:33:06.613253   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.617081   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.617131   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.622068   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:33:06.631532   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:33:06.640869   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.644888   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.644941   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.650230   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
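The three blocks above install each PEM into the node trust store the same way: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and link /etc/ssl/certs/<hash>.0 back to it. As a one-certificate sketch (hash value taken from the log):

    # Install a CA certificate by subject hash, as done for each bundle above.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 in the log
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"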
	I0719 18:33:06.661150   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:33:06.664801   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:33:06.664856   33150 kubeadm.go:934] updating node {m03 192.168.39.111 8443 v1.30.3 crio true true} ...
	I0719 18:33:06.664953   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:33:06.664983   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:33:06.665020   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:33:06.679864   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:33:06.679914   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
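The manifest above is written later in this run to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet launches kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 on port 8443. Once kubelet is up, a quick sanity check on the node might look like the following (assuming crictl is on the PATH; /healthz is usually reachable anonymously on kubeadm clusters but may require credentials):

    # Confirm the static pod is running and the VIP answers (VIP and port from the config above).
    sudo crictl ps --name kube-vip
    curl -k https://192.168.39.254:8443/healthz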
	I0719 18:33:06.679958   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:33:06.690356   33150 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 18:33:06.690402   33150 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 18:33:06.700762   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 18:33:06.700781   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 18:33:06.700800   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:33:06.700811   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:33:06.700855   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:33:06.700764   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 18:33:06.700894   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:33:06.700964   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:33:06.717579   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 18:33:06.717608   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 18:33:06.717612   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:33:06.717656   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 18:33:06.717678   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:33:06.717681   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 18:33:06.752029   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 18:33:06.752070   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
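The "Not caching binary" lines above point at the upstream release URLs plus their .sha256 checksum files, and the kubeadm, kubectl and kubelet binaries are then scp'd into /var/lib/minikube/binaries/v1.30.3 because the stat checks show that directory is empty. A manual equivalent for one binary (URLs exactly as logged):

    # Fetch kubelet v1.30.3 and verify it against the published checksum before installing it.
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.30.3/kubelet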
	I0719 18:33:07.586359   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 18:33:07.595678   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 18:33:07.611590   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:33:07.626485   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:33:07.641336   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:33:07.645172   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:33:07.656486   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:07.768930   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:33:07.786944   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:33:07.787499   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:07.787553   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:07.803903   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0719 18:33:07.804403   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:07.804935   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:07.804959   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:07.805322   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:07.805505   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:33:07.805656   33150 start.go:317] joinCluster: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:33:07.805772   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 18:33:07.805794   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:33:07.809326   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:07.809800   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:33:07.809827   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:07.810000   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:33:07.810184   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:33:07.810390   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:33:07.810542   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:33:07.954991   33150 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:33:07.955046   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qu2999.8ci5v73iktlw95nz --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0719 18:33:32.414858   33150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qu2999.8ci5v73iktlw95nz --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (24.45978558s)
	I0719 18:33:32.414893   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 18:33:32.915184   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108-m03 minikube.k8s.io/updated_at=2024_07_19T18_33_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=false
	I0719 18:33:33.044435   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-269108-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 18:33:33.172662   33150 start.go:319] duration metric: took 25.367000212s to joinCluster
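The ~25s join above follows the standard kubeadm control-plane join: mint a join command on an existing control-plane node, run it on the new node with --control-plane and that node's advertise address, then enable kubelet. Condensed from the logged commands (the token and discovery hash come from the first command; the other flags are exactly as logged):

    # On an existing control-plane node: print a long-lived join command.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0

    # On the new node, using the token/hash printed above:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m03 --ignore-preflight-errors=all
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet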
	I0719 18:33:33.172744   33150 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:33:33.173085   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:33.174347   33150 out.go:177] * Verifying Kubernetes components...
	I0719 18:33:33.175666   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:33.367744   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:33:33.384358   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:33:33.384642   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 18:33:33.384704   33150 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.163:8443
	I0719 18:33:33.384872   33150 node_ready.go:35] waiting up to 6m0s for node "ha-269108-m03" to be "Ready" ...
	I0719 18:33:33.384931   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:33.384937   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:33.384944   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:33.384949   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:33.388619   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:33.885896   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:33.885922   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:33.885934   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:33.885940   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:33.889364   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:34.385648   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:34.385668   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:34.385676   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:34.385680   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:34.389540   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:34.886044   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:34.886068   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:34.886083   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:34.886090   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:34.889126   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:35.385120   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:35.385145   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:35.385157   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:35.385162   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:35.387912   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:35.388778   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:35.885030   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:35.885049   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:35.885059   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:35.885065   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:35.888518   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:36.385310   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:36.385337   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:36.385348   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:36.385354   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:36.388519   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:36.885373   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:36.885396   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:36.885406   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:36.885412   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:36.888405   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:37.385233   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:37.385259   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:37.385267   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:37.385271   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:37.389256   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:37.389944   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:37.886013   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:37.886039   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:37.886050   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:37.886057   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:37.891072   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:33:38.386007   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:38.386030   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:38.386041   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:38.386047   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:38.389299   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:38.885238   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:38.885256   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:38.885265   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:38.885270   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:38.902954   33150 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0719 18:33:39.385197   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:39.385217   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:39.385226   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:39.385229   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:39.388731   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:39.885205   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:39.885228   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:39.885236   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:39.885241   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:39.888329   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:39.889154   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:40.385081   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:40.385102   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:40.385110   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:40.385113   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:40.388090   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:40.885692   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:40.885718   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:40.885727   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:40.885733   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:40.888685   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:41.385489   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:41.385508   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:41.385515   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:41.385518   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:41.389220   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:41.885275   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:41.885297   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:41.885304   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:41.885308   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:41.888469   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:42.385306   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:42.385328   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:42.385337   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:42.385340   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:42.388266   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:42.389088   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:42.885092   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:42.885126   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:42.885133   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:42.885138   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:42.888294   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:43.385303   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:43.385345   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:43.385358   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:43.385363   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:43.388120   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:43.885838   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:43.885859   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:43.885869   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:43.885875   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:43.888940   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:44.385989   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:44.386011   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:44.386022   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:44.386026   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:44.389337   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:44.389897   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:44.885118   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:44.885136   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:44.885144   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:44.885147   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:44.887989   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:45.385963   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:45.385993   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:45.386004   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:45.386013   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:45.389083   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:45.886036   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:45.886057   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:45.886065   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:45.886070   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:45.889176   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.385753   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:46.385773   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:46.385781   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:46.385786   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:46.389028   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.885870   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:46.885909   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:46.885919   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:46.885927   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:46.888961   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.889579   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:47.386039   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:47.386063   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:47.386076   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:47.386082   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:47.388953   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:47.885558   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:47.885588   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:47.885598   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:47.885604   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:47.889107   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:48.385517   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:48.385538   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:48.385547   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:48.385553   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:48.388811   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:48.885662   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:48.885682   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:48.885694   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:48.885701   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:48.888700   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:49.385254   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:49.385273   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:49.385281   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:49.385288   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:49.388281   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:49.388942   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:49.885335   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:49.885355   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:49.885364   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:49.885368   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:49.888183   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.385196   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:50.385229   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.385240   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.385246   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.388735   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:50.885864   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:50.885896   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.885905   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.885908   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.890726   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:33:50.891652   33150 node_ready.go:49] node "ha-269108-m03" has status "Ready":"True"
	I0719 18:33:50.891677   33150 node_ready.go:38] duration metric: took 17.506791833s for node "ha-269108-m03" to be "Ready" ...
	I0719 18:33:50.891689   33150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:33:50.891762   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:50.891780   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.891790   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.891797   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.898298   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:50.904298   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.904362   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-frb69
	I0719 18:33:50.904370   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.904377   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.904380   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.906974   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.907762   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.907781   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.907792   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.907798   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.910343   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.910922   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.910936   33150 pod_ready.go:81] duration metric: took 6.618815ms for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.910945   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.910992   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxh8h
	I0719 18:33:50.910999   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.911006   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.911012   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.913213   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.913788   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.913805   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.913815   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.913822   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.915769   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:33:50.916429   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.916445   33150 pod_ready.go:81] duration metric: took 5.492683ms for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.916452   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.916494   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108
	I0719 18:33:50.916500   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.916507   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.916512   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.919032   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.919555   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.919571   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.919581   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.919588   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.921939   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.922568   33150 pod_ready.go:92] pod "etcd-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.922582   33150 pod_ready.go:81] duration metric: took 6.124343ms for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.922590   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.922634   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m02
	I0719 18:33:50.922642   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.922649   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.922657   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.925078   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.926211   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:50.926225   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.926236   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.926243   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.928592   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.929127   33150 pod_ready.go:92] pod "etcd-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.929142   33150 pod_ready.go:81] duration metric: took 6.539503ms for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.929151   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.086528   33150 request.go:629] Waited for 157.314288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m03
	I0719 18:33:51.086590   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m03
	I0719 18:33:51.086595   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.086602   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.086606   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.089686   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.286732   33150 request.go:629] Waited for 196.367056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:51.286787   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:51.286792   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.286809   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.286813   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.289916   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.290565   33150 pod_ready.go:92] pod "etcd-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:51.290585   33150 pod_ready.go:81] duration metric: took 361.427312ms for pod "etcd-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.290605   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.486938   33150 request.go:629] Waited for 196.242049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:33:51.487014   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:33:51.487019   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.487026   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.487031   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.490314   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.686276   33150 request.go:629] Waited for 195.330296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:51.686334   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:51.686341   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.686350   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.686360   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.689202   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:51.689805   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:51.689825   33150 pod_ready.go:81] duration metric: took 399.209615ms for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.689834   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.886890   33150 request.go:629] Waited for 196.990678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:33:51.886959   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:33:51.886965   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.886973   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.886977   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.890150   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.086074   33150 request.go:629] Waited for 195.295195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:52.086139   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:52.086144   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.086151   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.086157   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.089406   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.089845   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.089860   33150 pod_ready.go:81] duration metric: took 400.019512ms for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.089872   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.285881   33150 request.go:629] Waited for 195.938468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m03
	I0719 18:33:52.285952   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m03
	I0719 18:33:52.285960   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.285971   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.285977   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.289331   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.486616   33150 request.go:629] Waited for 196.390046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:52.486688   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:52.486694   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.486706   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.486713   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.489835   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.490460   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.490481   33150 pod_ready.go:81] duration metric: took 400.601648ms for pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.490496   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.686880   33150 request.go:629] Waited for 196.317164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:33:52.686945   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:33:52.686954   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.686988   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.686999   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.690489   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.886570   33150 request.go:629] Waited for 195.328532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:52.886623   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:52.886633   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.886640   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.886644   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.889770   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.890296   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.890313   33150 pod_ready.go:81] duration metric: took 399.809414ms for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.890323   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.086423   33150 request.go:629] Waited for 196.028268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:33:53.086492   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:33:53.086500   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.086513   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.086522   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.089646   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.286712   33150 request.go:629] Waited for 196.358979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:53.286790   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:53.286798   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.286808   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.286816   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.290404   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.291379   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:53.291402   33150 pod_ready.go:81] duration metric: took 401.073104ms for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.291411   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.486727   33150 request.go:629] Waited for 195.2611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m03
	I0719 18:33:53.486801   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m03
	I0719 18:33:53.486809   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.486819   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.486828   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.489707   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:53.686861   33150 request.go:629] Waited for 196.357422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:53.686908   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:53.686913   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.686919   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.686923   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.689982   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.690888   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:53.690908   33150 pod_ready.go:81] duration metric: took 399.490475ms for pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.690918   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zpp2" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.886841   33150 request.go:629] Waited for 195.858187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zpp2
	I0719 18:33:53.886910   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zpp2
	I0719 18:33:53.886915   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.886923   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.886931   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.889909   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:54.085887   33150 request.go:629] Waited for 195.273343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:54.085936   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:54.085941   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.085950   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.085956   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.089208   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.089940   33150 pod_ready.go:92] pod "kube-proxy-9zpp2" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.089957   33150 pod_ready.go:81] duration metric: took 399.033ms for pod "kube-proxy-9zpp2" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.089967   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.286451   33150 request.go:629] Waited for 196.406428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:33:54.286512   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:33:54.286518   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.286525   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.286530   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.290305   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.486310   33150 request.go:629] Waited for 195.343805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:54.486363   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:54.486368   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.486382   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.486389   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.489415   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.490207   33150 pod_ready.go:92] pod "kube-proxy-pkjns" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.490227   33150 pod_ready.go:81] duration metric: took 400.253784ms for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.490236   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.686250   33150 request.go:629] Waited for 195.947003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:33:54.686316   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:33:54.686321   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.686334   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.686339   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.689714   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.886045   33150 request.go:629] Waited for 195.278872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:54.886097   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:54.886102   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.886110   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.886114   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.889463   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.890278   33150 pod_ready.go:92] pod "kube-proxy-vr7bz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.890299   33150 pod_ready.go:81] duration metric: took 400.056328ms for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.890312   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.086378   33150 request.go:629] Waited for 196.00321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:33:55.086434   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:33:55.086441   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.086451   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.086457   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.089184   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:55.285942   33150 request.go:629] Waited for 196.156276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:55.286006   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:55.286013   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.286023   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.286028   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.289161   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:55.289806   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:55.289824   33150 pod_ready.go:81] duration metric: took 399.504873ms for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.289833   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.485994   33150 request.go:629] Waited for 196.089371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:33:55.486077   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:33:55.486083   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.486093   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.486098   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.489047   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:55.685881   33150 request.go:629] Waited for 196.165669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:55.685947   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:55.685954   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.685965   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.685973   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.689080   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:55.689711   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:55.689727   33150 pod_ready.go:81] duration metric: took 399.88846ms for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.689736   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.886831   33150 request.go:629] Waited for 197.025983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m03
	I0719 18:33:55.886910   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m03
	I0719 18:33:55.886917   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.886927   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.886941   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.890802   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.086044   33150 request.go:629] Waited for 194.690243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:56.086119   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:56.086128   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.086136   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.086143   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.088917   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:56.089462   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:56.089482   33150 pod_ready.go:81] duration metric: took 399.73818ms for pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:56.089497   33150 pod_ready.go:38] duration metric: took 5.197794662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:33:56.089514   33150 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:33:56.089573   33150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:33:56.106014   33150 api_server.go:72] duration metric: took 22.933236564s to wait for apiserver process to appear ...
	I0719 18:33:56.106029   33150 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:33:56.106043   33150 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I0719 18:33:56.110067   33150 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I0719 18:33:56.110133   33150 round_trippers.go:463] GET https://192.168.39.163:8443/version
	I0719 18:33:56.110143   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.110159   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.110170   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.110998   33150 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 18:33:56.111087   33150 api_server.go:141] control plane version: v1.30.3
	I0719 18:33:56.111104   33150 api_server.go:131] duration metric: took 5.067921ms to wait for apiserver health ...
	I0719 18:33:56.111112   33150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:33:56.286511   33150 request.go:629] Waited for 175.332478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.286581   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.286586   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.286594   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.286598   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.293303   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:56.300529   33150 system_pods.go:59] 24 kube-system pods found
	I0719 18:33:56.300561   33150 system_pods.go:61] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:33:56.300566   33150 system_pods.go:61] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:33:56.300570   33150 system_pods.go:61] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:33:56.300573   33150 system_pods.go:61] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:33:56.300577   33150 system_pods.go:61] "etcd-ha-269108-m03" [03383126-0715-4c44-8c71-58f6a3ad717f] Running
	I0719 18:33:56.300582   33150 system_pods.go:61] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:33:56.300585   33150 system_pods.go:61] "kindnet-bmcvm" [41d2a7b7-173e-402e-9323-f6efed16ce26] Running
	I0719 18:33:56.300587   33150 system_pods.go:61] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:33:56.300591   33150 system_pods.go:61] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:33:56.300594   33150 system_pods.go:61] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:33:56.300597   33150 system_pods.go:61] "kube-apiserver-ha-269108-m03" [409c3247-2b63-44b3-8563-3be63e699ba8] Running
	I0719 18:33:56.300600   33150 system_pods.go:61] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:33:56.300604   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:33:56.300607   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m03" [aa4307d2-ee77-4c87-8a9c-668e56ea9ca5] Running
	I0719 18:33:56.300610   33150 system_pods.go:61] "kube-proxy-9zpp2" [395be6bf-b8bd-4848-8550-cbc2e647c238] Running
	I0719 18:33:56.300613   33150 system_pods.go:61] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:33:56.300616   33150 system_pods.go:61] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:33:56.300620   33150 system_pods.go:61] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:33:56.300624   33150 system_pods.go:61] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:33:56.300626   33150 system_pods.go:61] "kube-scheduler-ha-269108-m03" [a6501f5b-579b-4940-bbcb-1ed60b08a0ef] Running
	I0719 18:33:56.300629   33150 system_pods.go:61] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:33:56.300636   33150 system_pods.go:61] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:33:56.300641   33150 system_pods.go:61] "kube-vip-ha-269108-m03" [c6a6f347-4ae3-4401-b654-895809bfcce0] Running
	I0719 18:33:56.300646   33150 system_pods.go:61] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:33:56.300651   33150 system_pods.go:74] duration metric: took 189.533061ms to wait for pod list to return data ...
	I0719 18:33:56.300660   33150 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:33:56.486185   33150 request.go:629] Waited for 185.465008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:33:56.486235   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:33:56.486240   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.486247   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.486251   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.489296   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.489399   33150 default_sa.go:45] found service account: "default"
	I0719 18:33:56.489412   33150 default_sa.go:55] duration metric: took 188.746699ms for default service account to be created ...
	I0719 18:33:56.489419   33150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:33:56.686064   33150 request.go:629] Waited for 196.570052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.686128   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.686135   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.686146   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.686152   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.692563   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:56.698840   33150 system_pods.go:86] 24 kube-system pods found
	I0719 18:33:56.698865   33150 system_pods.go:89] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:33:56.698872   33150 system_pods.go:89] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:33:56.698876   33150 system_pods.go:89] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:33:56.698880   33150 system_pods.go:89] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:33:56.698884   33150 system_pods.go:89] "etcd-ha-269108-m03" [03383126-0715-4c44-8c71-58f6a3ad717f] Running
	I0719 18:33:56.698889   33150 system_pods.go:89] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:33:56.698894   33150 system_pods.go:89] "kindnet-bmcvm" [41d2a7b7-173e-402e-9323-f6efed16ce26] Running
	I0719 18:33:56.698901   33150 system_pods.go:89] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:33:56.698909   33150 system_pods.go:89] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:33:56.698915   33150 system_pods.go:89] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:33:56.698924   33150 system_pods.go:89] "kube-apiserver-ha-269108-m03" [409c3247-2b63-44b3-8563-3be63e699ba8] Running
	I0719 18:33:56.698929   33150 system_pods.go:89] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:33:56.698936   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:33:56.698944   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m03" [aa4307d2-ee77-4c87-8a9c-668e56ea9ca5] Running
	I0719 18:33:56.698948   33150 system_pods.go:89] "kube-proxy-9zpp2" [395be6bf-b8bd-4848-8550-cbc2e647c238] Running
	I0719 18:33:56.698955   33150 system_pods.go:89] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:33:56.698959   33150 system_pods.go:89] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:33:56.698965   33150 system_pods.go:89] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:33:56.698969   33150 system_pods.go:89] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:33:56.698975   33150 system_pods.go:89] "kube-scheduler-ha-269108-m03" [a6501f5b-579b-4940-bbcb-1ed60b08a0ef] Running
	I0719 18:33:56.698979   33150 system_pods.go:89] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:33:56.698985   33150 system_pods.go:89] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:33:56.698988   33150 system_pods.go:89] "kube-vip-ha-269108-m03" [c6a6f347-4ae3-4401-b654-895809bfcce0] Running
	I0719 18:33:56.698994   33150 system_pods.go:89] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:33:56.699001   33150 system_pods.go:126] duration metric: took 209.577876ms to wait for k8s-apps to be running ...
	I0719 18:33:56.699013   33150 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:33:56.699064   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:33:56.714749   33150 system_svc.go:56] duration metric: took 15.731122ms WaitForService to wait for kubelet
	I0719 18:33:56.714778   33150 kubeadm.go:582] duration metric: took 23.542000656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:33:56.714800   33150 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:33:56.886307   33150 request.go:629] Waited for 171.436423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes
	I0719 18:33:56.886367   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes
	I0719 18:33:56.886375   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.886385   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.886392   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.890097   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.891049   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891068   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891080   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891089   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891095   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891102   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891110   33150 node_conditions.go:105] duration metric: took 176.304067ms to run NodePressure ...
	I0719 18:33:56.891128   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:33:56.891157   33150 start.go:255] writing updated cluster config ...
	I0719 18:33:56.891475   33150 ssh_runner.go:195] Run: rm -f paused
	I0719 18:33:56.942189   33150 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 18:33:56.944116   33150 out.go:177] * Done! kubectl is now configured to use "ha-269108" cluster and "default" namespace by default
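	[editor's note] The log above shows minikube's readiness gate in action: it polls GET /api/v1/nodes/<name> roughly every 500ms until the node reports Ready, then repeats the same pattern for each system-critical pod, and finally checks the apiserver /healthz endpoint. Below is a minimal client-go sketch of that polling loop; it is illustrative only (the kubeconfig path, node name, and poll interval are assumptions taken from this run) and is not minikube's actual implementation.

	// Illustrative sketch only: poll a node's Ready condition the way the log above does.
	// Assumptions: kubeconfig path, node name, and 500ms poll interval are taken from this run.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from a kubeconfig pointing at the cluster under test (path is an assumption).
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll until the node reports Ready=True, mirroring the ~500ms cadence seen in the log.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-269108-m03", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node ha-269108-m03 is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	Issuing node and pod GETs back-to-back like this is also what produces the "Waited for ... due to client-side throttling" messages visible in the log above.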
	
	
	==> CRI-O <==
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.021012541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414255020985886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0f143af-5a18-4881-b6f2-c6ee905550b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.021708213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a4c47df-a59d-45fb-873e-6b93f5eb7097 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.021789120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a4c47df-a59d-45fb-873e-6b93f5eb7097 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.022026960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a4c47df-a59d-45fb-873e-6b93f5eb7097 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.058209043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=206643b8-8bcf-4e44-9603-1ffda87b1466 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.058303770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=206643b8-8bcf-4e44-9603-1ffda87b1466 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.060469077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcb84459-7ada-4aea-9cce-1a08cfd76890 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.061299148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414255061270032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcb84459-7ada-4aea-9cce-1a08cfd76890 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.062072458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bfbfd28-e285-4aa9-a77e-e6c6a55fd743 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.062193529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bfbfd28-e285-4aa9-a77e-e6c6a55fd743 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.062994510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bfbfd28-e285-4aa9-a77e-e6c6a55fd743 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.098255235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39fbd64e-8156-4547-be75-4459d64c05fa name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.098338013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39fbd64e-8156-4547-be75-4459d64c05fa name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.099278699Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d02a476d-a5e3-448f-a051-bd9c0c7b863b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.099713030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414255099693012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d02a476d-a5e3-448f-a051-bd9c0c7b863b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.100187769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5388f99-b576-417c-9d92-1c27270e1340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.100381669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5388f99-b576-417c-9d92-1c27270e1340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.100604651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5388f99-b576-417c-9d92-1c27270e1340 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.133801317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b58aac3-457d-43dc-ac29-5dcd009d23fe name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.133870576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b58aac3-457d-43dc-ac29-5dcd009d23fe name=/runtime.v1.RuntimeService/Version
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.136907984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae36a749-c0f3-4181-9565-8877009b2b93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.137432077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414255137406274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae36a749-c0f3-4181-9565-8877009b2b93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.138065045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a70ef6a-513a-47c7-b3e6-01caa16b3158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.138118817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a70ef6a-513a-47c7-b3e6-01caa16b3158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:37:35 ha-269108 crio[681]: time="2024-07-19 18:37:35.138397655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a70ef6a-513a-47c7-b3e6-01caa16b3158 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7246971f0534       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8ce5a6a0852e2       busybox-fc5497c4f-8qqtr
	a98e1c0fea9d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   6a81dda96a2e1       storage-provisioner
	4e3cdc51f7ed0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3932d100e152d       coredns-7db6d8ff4d-zxh8h
	160fdbf709fdf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7a1d288c9d262       coredns-7db6d8ff4d-frb69
	1b0808457c5c7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   eb0e9822a0f4e       kindnet-4lhkr
	61f7a66979140       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   1f4d197a59cd5       kube-proxy-pkjns
	e60da0dad65c5       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   e951c3e25d5b2       kube-vip-ha-269108
	d7d66135b54e4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   4d82589b51ae7       kube-apiserver-ha-269108
	00761e9db17ec       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   9a07a59b0e0e4       kube-scheduler-ha-269108
	c3d91756bd016       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   30c0f83ae9205       kube-controller-manager-ha-269108
	65cdebd7fead6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   4473d4683f2e1       etcd-ha-269108
	
	
	==> coredns [160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a] <==
	[INFO] 10.244.0.4:33226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203273s
	[INFO] 10.244.0.4:44058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016506s
	[INFO] 10.244.0.4:59413 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009965799s
	[INFO] 10.244.0.4:35057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093714s
	[INFO] 10.244.0.4:60700 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103292s
	[INFO] 10.244.1.2:34140 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001831088s
	[INFO] 10.244.1.2:55673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078159s
	[INFO] 10.244.1.2:48518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000715s
	[INFO] 10.244.1.2:60538 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070669s
	[INFO] 10.244.2.2:41729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195097s
	[INFO] 10.244.2.2:54527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110807s
	[INFO] 10.244.0.4:56953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073764s
	[INFO] 10.244.0.4:52941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231699s
	[INFO] 10.244.1.2:55601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109961s
	[INFO] 10.244.1.2:35949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100008s
	[INFO] 10.244.1.2:37921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013057s
	[INFO] 10.244.2.2:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135522s
	[INFO] 10.244.2.2:49412 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111732s
	[INFO] 10.244.0.4:60902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080561s
	[INFO] 10.244.0.4:46371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144185s
	[INFO] 10.244.1.2:42114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113886s
	[INFO] 10.244.1.2:42251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112767s
	[INFO] 10.244.1.2:32914 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121034s
	[INFO] 10.244.2.2:44583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109669s
	[INFO] 10.244.2.2:43221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085803s
	
	
	==> coredns [4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433] <==
	[INFO] 10.244.2.2:37253 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000085955s
	[INFO] 10.244.2.2:39927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001720898s
	[INFO] 10.244.0.4:36135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114066s
	[INFO] 10.244.0.4:39115 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004365932s
	[INFO] 10.244.0.4:43055 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088303s
	[INFO] 10.244.1.2:42061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106711s
	[INFO] 10.244.1.2:53460 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350422s
	[INFO] 10.244.1.2:37958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120338s
	[INFO] 10.244.1.2:41663 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057406s
	[INFO] 10.244.2.2:39895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001764061s
	[INFO] 10.244.2.2:32797 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140596s
	[INFO] 10.244.2.2:41374 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014218s
	[INFO] 10.244.2.2:38111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248331s
	[INFO] 10.244.2.2:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167068s
	[INFO] 10.244.2.2:35228 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156257s
	[INFO] 10.244.0.4:60265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081089s
	[INFO] 10.244.0.4:46991 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004503s
	[INFO] 10.244.1.2:54120 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097136s
	[INFO] 10.244.2.2:36674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166666s
	[INFO] 10.244.2.2:45715 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115611s
	[INFO] 10.244.0.4:49067 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123058s
	[INFO] 10.244.0.4:53779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117437s
	[INFO] 10.244.1.2:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133641s
	[INFO] 10.244.2.2:48409 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071901s
	[INFO] 10.244.2.2:34803 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085512s
	
	
	==> describe nodes <==
	Name:               ha-269108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:37:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-269108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e79eb460340347cf89b27475ae92f17a
	  System UUID:                e79eb460-3403-47cf-89b2-7475ae92f17a
	  Boot ID:                    54c61684-08fe-4593-a142-b4110f9ebe97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8qqtr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-frb69             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-7db6d8ff4d-zxh8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-269108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-4lhkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-269108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-269108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-pkjns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-269108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-269108                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m7s   kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-269108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-269108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-269108 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-269108 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal  RegisteredNode           3m48s  node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	
	
	Name:               ha-269108-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:32:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:35:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-269108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd8856bcc7e74f5db8563df427c95529
	  System UUID:                bd8856bc-c7e7-4f5d-b856-3df427c95529
	  Boot ID:                    f0c82245-88b6-4481-bd3f-181d903ffe04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hds6j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-269108-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-fs5wk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-ha-269108-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-ha-269108-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-vr7bz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-ha-269108-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-269108-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-269108-m02 status is now: NodeNotReady
	
	
	Name:               ha-269108-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_33_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:33:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:37:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-269108-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e00e6f38a04bfdb40d7ea3e2c16e0c
	  System UUID:                c6e00e6f-38a0-4bfd-b40d-7ea3e2c16e0c
	  Boot ID:                    42ac80d5-69e5-48d1-95cb-2df50a1cc367
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lrphf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-269108-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-bmcvm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-269108-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-269108-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-9zpp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-269108-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-269108-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-269108-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal  RegisteredNode           3m48s                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	
	
	Name:               ha-269108-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_34_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:34:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:37:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-269108-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db189232948d49d08567fb37977226c0
	  System UUID:                db189232-948d-49d0-8567-fb37977226c0
	  Boot ID:                    f229708f-8e4e-40a7-ad94-1f7eb3a46ee7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2cplc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-qfjxp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-269108-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 18:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050935] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037004] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.447258] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.741619] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.433325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.238779] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059116] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053977] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173547] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128447] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.273873] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[Jul19 18:31] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +3.987459] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062161] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240611] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.081692] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.045525] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.095803] kauditd_printk_skb: 36 callbacks suppressed
	[Jul19 18:32] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d] <==
	{"level":"warn","ts":"2024-07-19T18:37:34.958937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.058442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.158415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.258881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.424823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.432273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.439257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.443401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.446631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.457933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.458675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.464975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.471478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.475345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.478876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.486903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.492605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.498945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.502641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.505568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.511348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.517043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.522503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.525098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:37:35.558832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:37:35 up 7 min,  0 users,  load average: 0.17, 0.17, 0.09
	Linux ha-269108 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671] <==
	I0719 18:37:02.394928       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:37:12.390697       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:37:12.390743       1 main.go:299] handling current node
	I0719 18:37:12.390758       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:37:12.390763       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:37:12.390894       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:37:12.390913       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:37:12.390985       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:37:12.390990       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:37:22.388834       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:37:22.388939       1 main.go:299] handling current node
	I0719 18:37:22.388994       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:37:22.389018       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:37:22.389221       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:37:22.389255       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:37:22.389342       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:37:22.389364       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:37:32.385988       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:37:32.386128       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:37:32.386421       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:37:32.386455       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:37:32.386527       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:37:32.386545       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:37:32.386603       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:37:32.386621       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf] <==
	I0719 18:31:11.919680       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 18:31:13.112936       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 18:31:13.131569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 18:31:13.320336       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 18:31:26.483972       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 18:31:26.553397       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0719 18:33:29.681799       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0719 18:33:29.681870       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0719 18:33:29.681912       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 5.742µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0719 18:33:29.683535       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0719 18:33:29.683721       1 timeout.go:142] post-timeout activity - time-elapsed: 2.081477ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0719 18:34:02.629460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46846: use of closed network connection
	E0719 18:34:02.795682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46866: use of closed network connection
	E0719 18:34:02.974784       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46880: use of closed network connection
	E0719 18:34:03.169331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46894: use of closed network connection
	E0719 18:34:03.359531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46918: use of closed network connection
	E0719 18:34:03.716413       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46958: use of closed network connection
	E0719 18:34:03.881836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46976: use of closed network connection
	E0719 18:34:04.055280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47002: use of closed network connection
	E0719 18:34:04.324120       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47022: use of closed network connection
	E0719 18:34:04.486741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47032: use of closed network connection
	E0719 18:34:04.671513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47050: use of closed network connection
	E0719 18:34:04.834903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47062: use of closed network connection
	E0719 18:34:05.025312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47086: use of closed network connection
	E0719 18:34:05.196710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47116: use of closed network connection
	
	
	==> kube-controller-manager [c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f] <==
	I0719 18:33:57.872313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.69264ms"
	I0719 18:33:57.872572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.58µs"
	I0719 18:33:57.894061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.786µs"
	I0719 18:33:57.894883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.077µs"
	I0719 18:33:57.902541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.256µs"
	I0719 18:33:58.131958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="225.373822ms"
	I0719 18:33:58.157966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.887812ms"
	I0719 18:33:58.228064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.944763ms"
	I0719 18:33:58.230254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="486.306µs"
	I0719 18:33:58.291133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.672145ms"
	I0719 18:33:58.291281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.602µs"
	I0719 18:33:58.803730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.691µs"
	I0719 18:34:01.668382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.33585ms"
	I0719 18:34:01.669328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.182µs"
	I0719 18:34:02.037972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.125574ms"
	I0719 18:34:02.038220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.779µs"
	E0719 18:34:34.060492       1 certificate_controller.go:146] Sync csr-j4jgq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j4jgq": the object has been modified; please apply your changes to the latest version and try again
	E0719 18:34:34.076540       1 certificate_controller.go:146] Sync csr-j4jgq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j4jgq": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:34:34.333735       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-269108-m04\" does not exist"
	I0719 18:34:34.354537       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-269108-m04" podCIDRs=["10.244.3.0/24"]
	I0719 18:34:36.648737       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-269108-m04"
	I0719 18:34:54.964090       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-269108-m04"
	I0719 18:35:47.596523       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-269108-m04"
	I0719 18:35:47.632838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.338485ms"
	I0719 18:35:47.634539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.389µs"
	
	
	==> kube-proxy [61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452] <==
	I0719 18:31:27.588679       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:31:27.622593       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	I0719 18:31:27.666787       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:31:27.666836       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:31:27.666853       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:31:27.669703       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:31:27.669988       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:31:27.670014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:31:27.672653       1 config.go:192] "Starting service config controller"
	I0719 18:31:27.672963       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:31:27.673019       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:31:27.673025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:31:27.674382       1 config.go:319] "Starting node config controller"
	I0719 18:31:27.676374       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:31:27.773740       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:31:27.773795       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:31:27.777532       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74] <==
	I0719 18:31:12.952590       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 18:33:28.922708       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-djpx2\": pod kube-proxy-djpx2 is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-djpx2" node="ha-269108-m03"
	E0719 18:33:28.922873       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-djpx2\": pod kube-proxy-djpx2 is already assigned to node \"ha-269108-m03\"" pod="kube-system/kube-proxy-djpx2"
	I0719 18:33:28.922939       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-djpx2" node="ha-269108-m03"
	E0719 18:33:28.998782       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9zpp2\": pod kube-proxy-9zpp2 is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9zpp2" node="ha-269108-m03"
	E0719 18:33:28.998854       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 395be6bf-b8bd-4848-8550-cbc2e647c238(kube-system/kube-proxy-9zpp2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9zpp2"
	E0719 18:33:28.998884       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9zpp2\": pod kube-proxy-9zpp2 is already assigned to node \"ha-269108-m03\"" pod="kube-system/kube-proxy-9zpp2"
	I0719 18:33:28.998918       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9zpp2" node="ha-269108-m03"
	I0719 18:33:57.789554       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="baf3cafd-8ad7-4c5c-8143-e17b5aa2aa06" pod="default/busybox-fc5497c4f-lrphf" assumedNode="ha-269108-m03" currentNode="ha-269108-m02"
	E0719 18:33:57.795876       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lrphf\": pod busybox-fc5497c4f-lrphf is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lrphf" node="ha-269108-m02"
	E0719 18:33:57.795937       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod baf3cafd-8ad7-4c5c-8143-e17b5aa2aa06(default/busybox-fc5497c4f-lrphf) was assumed on ha-269108-m02 but assigned to ha-269108-m03" pod="default/busybox-fc5497c4f-lrphf"
	E0719 18:33:57.795953       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lrphf\": pod busybox-fc5497c4f-lrphf is already assigned to node \"ha-269108-m03\"" pod="default/busybox-fc5497c4f-lrphf"
	I0719 18:33:57.796029       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lrphf" node="ha-269108-m03"
	E0719 18:34:34.397980       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2cplc\": pod kindnet-2cplc is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2cplc" node="ha-269108-m04"
	E0719 18:34:34.398063       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod edf968d4-a9f2-4b87-b9a7-bb5dfdd8ded6(kube-system/kindnet-2cplc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2cplc"
	E0719 18:34:34.398115       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2cplc\": pod kindnet-2cplc is already assigned to node \"ha-269108-m04\"" pod="kube-system/kindnet-2cplc"
	I0719 18:34:34.398188       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2cplc" node="ha-269108-m04"
	E0719 18:34:34.469601       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tpwth\": pod kindnet-tpwth is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tpwth" node="ha-269108-m04"
	E0719 18:34:34.469675       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 79321755-810f-4828-b33e-6b74811f37ce(kube-system/kindnet-tpwth) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tpwth"
	E0719 18:34:34.469699       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tpwth\": pod kindnet-tpwth is already assigned to node \"ha-269108-m04\"" pod="kube-system/kindnet-tpwth"
	I0719 18:34:34.469717       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tpwth" node="ha-269108-m04"
	E0719 18:34:34.607244       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-p8sld\": pod kube-proxy-p8sld is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-p8sld" node="ha-269108-m04"
	E0719 18:34:34.607398       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4817e36-a7dc-4c67-86e6-496a1164dfc7(kube-system/kube-proxy-p8sld) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-p8sld"
	E0719 18:34:34.607511       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-p8sld\": pod kube-proxy-p8sld is already assigned to node \"ha-269108-m04\"" pod="kube-system/kube-proxy-p8sld"
	I0719 18:34:34.607580       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-p8sld" node="ha-269108-m04"
	
	
	==> kubelet <==
	Jul 19 18:33:57 ha-269108 kubelet[1371]: I0719 18:33:57.834452    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zxh8h" podStartSLOduration=151.834381183 podStartE2EDuration="2m31.834381183s" podCreationTimestamp="2024-07-19 18:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 18:31:44.512417121 +0000 UTC m=+31.418431184" watchObservedRunningTime="2024-07-19 18:33:57.834381183 +0000 UTC m=+164.740395250"
	Jul 19 18:33:57 ha-269108 kubelet[1371]: I0719 18:33:57.836109    1371 topology_manager.go:215] "Topology Admit Handler" podUID="c0699d4b-e052-40cf-b41c-ab604d64531c" podNamespace="default" podName="busybox-fc5497c4f-8qqtr"
	Jul 19 18:33:57 ha-269108 kubelet[1371]: I0719 18:33:57.926934    1371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnml\" (UniqueName: \"kubernetes.io/projected/c0699d4b-e052-40cf-b41c-ab604d64531c-kube-api-access-2bnml\") pod \"busybox-fc5497c4f-8qqtr\" (UID: \"c0699d4b-e052-40cf-b41c-ab604d64531c\") " pod="default/busybox-fc5497c4f-8qqtr"
	Jul 19 18:34:02 ha-269108 kubelet[1371]: I0719 18:34:02.003108    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-8qqtr" podStartSLOduration=2.3715196929999998 podStartE2EDuration="5.003028182s" podCreationTimestamp="2024-07-19 18:33:57 +0000 UTC" firstStartedPulling="2024-07-19 18:33:58.379529598 +0000 UTC m=+165.285543656" lastFinishedPulling="2024-07-19 18:34:01.011038093 +0000 UTC m=+167.917052145" observedRunningTime="2024-07-19 18:34:02.001665773 +0000 UTC m=+168.907679834" watchObservedRunningTime="2024-07-19 18:34:02.003028182 +0000 UTC m=+168.909042247"
	Jul 19 18:34:04 ha-269108 kubelet[1371]: E0719 18:34:04.486854    1371 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49904->127.0.0.1:40697: write tcp 127.0.0.1:49904->127.0.0.1:40697: write: broken pipe
	Jul 19 18:34:13 ha-269108 kubelet[1371]: E0719 18:34:13.285224    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:34:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:35:13 ha-269108 kubelet[1371]: E0719 18:35:13.274705    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:35:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:36:13 ha-269108 kubelet[1371]: E0719 18:36:13.275329    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:36:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:37:13 ha-269108 kubelet[1371]: E0719 18:37:13.274973    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:37:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-269108 -n ha-269108
helpers_test.go:261: (dbg) Run:  kubectl --context ha-269108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.73s)
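For manual triage of this failure, the post-mortem above can be approximated against the same profile with a few commands. This is a minimal sketch, not part of the test harness; the profile name ha-269108, the kubeconfig context, and the secondary node name ha-269108-m02 are taken from the logs above.

	# Which nodes does the cluster still consider Ready? (m02 should report NotReady/Unknown after the stop.)
	kubectl --context ha-269108 get nodes -o wide

	# Inspect the stopped node's conditions and taints in detail.
	kubectl --context ha-269108 describe node ha-269108-m02

	# List any pods that are not Running, as the post-mortem helper does.
	kubectl --context ha-269108 get po -A --field-selector=status.phase!=Running

	# Ask minikube for its own view of every node in the profile.
	out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr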

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (3.1937302s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:37:40.036170   37943 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:37:40.036415   37943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:40.036424   37943 out.go:304] Setting ErrFile to fd 2...
	I0719 18:37:40.036428   37943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:40.036585   37943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:37:40.036751   37943 out.go:298] Setting JSON to false
	I0719 18:37:40.036779   37943 mustload.go:65] Loading cluster: ha-269108
	I0719 18:37:40.036816   37943 notify.go:220] Checking for updates...
	I0719 18:37:40.037275   37943 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:37:40.037295   37943 status.go:255] checking status of ha-269108 ...
	I0719 18:37:40.037723   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.037759   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.056980   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
	I0719 18:37:40.057390   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.057985   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.058015   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.058344   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.058522   37943 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:37:40.059812   37943 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:37:40.059826   37943 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:40.060245   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.060289   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.075345   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0719 18:37:40.075723   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.076240   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.076264   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.076572   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.076793   37943 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:37:40.080136   37943 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:40.080580   37943 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:40.080612   37943 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:40.080697   37943 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:40.080986   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.081017   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.095328   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0719 18:37:40.095685   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.096194   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.096212   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.096516   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.096704   37943 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:37:40.096909   37943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:40.096933   37943 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:37:40.099667   37943 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:40.100247   37943 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:40.100280   37943 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:40.100435   37943 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:37:40.100631   37943 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:37:40.100779   37943 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:37:40.100952   37943 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:37:40.178768   37943 ssh_runner.go:195] Run: systemctl --version
	I0719 18:37:40.185892   37943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:40.200465   37943 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:40.200500   37943 api_server.go:166] Checking apiserver status ...
	I0719 18:37:40.200547   37943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:40.214019   37943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:37:40.223516   37943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:40.223561   37943 ssh_runner.go:195] Run: ls
	I0719 18:37:40.227567   37943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:40.233321   37943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:40.233341   37943 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:37:40.233349   37943 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:40.233363   37943 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:37:40.233697   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.233730   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.248448   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45669
	I0719 18:37:40.248819   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.249283   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.249308   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.249670   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.249878   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:37:40.251527   37943 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:37:40.251541   37943 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:40.251849   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.251888   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.266382   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0719 18:37:40.266733   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.267169   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.267187   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.267483   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.267653   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:37:40.270552   37943 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:40.270948   37943 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:40.270965   37943 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:40.271135   37943 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:40.271437   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:40.271482   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:40.285750   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I0719 18:37:40.286156   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:40.286585   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:40.286605   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:40.286893   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:40.287077   37943 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:37:40.287236   37943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:40.287257   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:37:40.289650   37943 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:40.290036   37943 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:40.290060   37943 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:40.290179   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:37:40.290376   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:37:40.290543   37943 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:37:40.290699   37943 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:37:42.848071   37943 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:42.848160   37943 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:37:42.848176   37943 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:42.848183   37943 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:37:42.848200   37943 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:42.848209   37943 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:37:42.848519   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:42.848557   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:42.863091   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
	I0719 18:37:42.863566   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:42.864144   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:42.864167   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:42.864498   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:42.864735   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:37:42.866277   37943 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:37:42.866295   37943 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:42.866617   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:42.866663   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:42.882135   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0719 18:37:42.882531   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:42.882997   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:42.883032   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:42.883473   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:42.883671   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:37:42.886616   37943 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:42.887019   37943 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:42.887043   37943 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:42.887148   37943 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:42.887457   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:42.887504   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:42.901369   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0719 18:37:42.901727   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:42.902109   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:42.902128   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:42.902421   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:42.902611   37943 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:37:42.902762   37943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:42.902782   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:37:42.905585   37943 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:42.905969   37943 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:42.905996   37943 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:42.906128   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:37:42.906302   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:37:42.906509   37943 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:37:42.906635   37943 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:37:42.990975   37943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:43.004849   37943 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:43.004878   37943 api_server.go:166] Checking apiserver status ...
	I0719 18:37:43.004924   37943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:43.017404   37943 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:37:43.026074   37943 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:43.026115   37943 ssh_runner.go:195] Run: ls
	I0719 18:37:43.029900   37943 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:43.034316   37943 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:43.034333   37943 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:37:43.034340   37943 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:43.034358   37943 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:37:43.034696   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:43.034735   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:43.049686   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42865
	I0719 18:37:43.050110   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:43.050573   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:43.050601   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:43.050885   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:43.051088   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:37:43.052594   37943 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:37:43.052607   37943 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:43.052901   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:43.052934   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:43.067406   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33829
	I0719 18:37:43.067782   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:43.068268   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:43.068291   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:43.068618   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:43.068799   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:37:43.071497   37943 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:43.071886   37943 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:43.071909   37943 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:43.072054   37943 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:43.072350   37943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:43.072424   37943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:43.087306   37943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0719 18:37:43.087687   37943 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:43.088155   37943 main.go:141] libmachine: Using API Version  1
	I0719 18:37:43.088175   37943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:43.088467   37943 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:43.088636   37943 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:37:43.088811   37943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:43.088833   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:37:43.091224   37943 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:43.091598   37943 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:43.091625   37943 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:43.091775   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:37:43.091933   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:37:43.092084   37943 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:37:43.092209   37943 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:37:43.175084   37943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:43.189557   37943 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (4.962258968s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:37:44.409781   38042 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:37:44.410006   38042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:44.410015   38042 out.go:304] Setting ErrFile to fd 2...
	I0719 18:37:44.410019   38042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:44.410216   38042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:37:44.410379   38042 out.go:298] Setting JSON to false
	I0719 18:37:44.410406   38042 mustload.go:65] Loading cluster: ha-269108
	I0719 18:37:44.410525   38042 notify.go:220] Checking for updates...
	I0719 18:37:44.410874   38042 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:37:44.410891   38042 status.go:255] checking status of ha-269108 ...
	I0719 18:37:44.411278   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.411354   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.430849   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0719 18:37:44.431314   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.431927   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.431971   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.432304   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.432550   38042 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:37:44.434579   38042 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:37:44.434596   38042 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:44.434918   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.434960   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.449529   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0719 18:37:44.449982   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.450408   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.450428   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.450748   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.450912   38042 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:37:44.453502   38042 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:44.453914   38042 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:44.453954   38042 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:44.454083   38042 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:44.454404   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.454443   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.468595   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46369
	I0719 18:37:44.469005   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.469407   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.469432   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.469739   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.469911   38042 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:37:44.470099   38042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:44.470124   38042 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:37:44.473035   38042 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:44.473427   38042 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:44.473454   38042 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:44.473607   38042 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:37:44.473787   38042 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:37:44.473927   38042 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:37:44.474075   38042 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:37:44.551048   38042 ssh_runner.go:195] Run: systemctl --version
	I0719 18:37:44.557411   38042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:44.573311   38042 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:44.573335   38042 api_server.go:166] Checking apiserver status ...
	I0719 18:37:44.573362   38042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:44.587423   38042 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:37:44.596261   38042 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:44.596324   38042 ssh_runner.go:195] Run: ls
	I0719 18:37:44.600398   38042 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:44.606507   38042 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:44.606530   38042 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:37:44.606542   38042 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:44.606566   38042 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:37:44.606931   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.606970   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.622988   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33663
	I0719 18:37:44.623560   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.624119   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.624143   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.624457   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.624636   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:37:44.626085   38042 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:37:44.626099   38042 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:44.626360   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.626392   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.641214   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0719 18:37:44.641667   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.642158   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.642177   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.642452   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.642598   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:37:44.645082   38042 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:44.645509   38042 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:44.645529   38042 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:44.645719   38042 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:44.646110   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:44.646159   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:44.664141   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0719 18:37:44.664653   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:44.665099   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:44.665123   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:44.665354   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:44.665645   38042 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:37:44.665837   38042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:44.665859   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:37:44.668822   38042 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:44.669197   38042 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:44.669230   38042 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:44.669484   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:37:44.669677   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:37:44.669870   38042 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:37:44.670039   38042 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:37:45.920106   38042 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:45.920159   38042 retry.go:31] will retry after 198.130997ms: dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:48.992052   38042 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:48.992136   38042 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:37:48.992159   38042 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:48.992172   38042 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:37:48.992196   38042 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:48.992211   38042 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:37:48.992532   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:48.992605   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.007165   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0719 18:37:49.007618   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.008125   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.008145   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.008441   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.008599   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:37:49.010028   38042 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:37:49.010044   38042 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:49.010443   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:49.010485   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.025237   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0719 18:37:49.025631   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.026113   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.026141   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.026424   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.026594   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:37:49.029054   38042 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:49.029583   38042 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:49.029613   38042 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:49.029719   38042 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:49.030030   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:49.030064   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.045529   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0719 18:37:49.045981   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.046463   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.046481   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.046770   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.046947   38042 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:37:49.047139   38042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:49.047160   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:37:49.049611   38042 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:49.050015   38042 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:49.050038   38042 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:49.050229   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:37:49.050395   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:37:49.050555   38042 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:37:49.050730   38042 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:37:49.134960   38042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:49.148297   38042 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:49.148355   38042 api_server.go:166] Checking apiserver status ...
	I0719 18:37:49.148415   38042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:49.161935   38042 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:37:49.170958   38042 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:49.170998   38042 ssh_runner.go:195] Run: ls
	I0719 18:37:49.174773   38042 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:49.178925   38042 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:49.178945   38042 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:37:49.178963   38042 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:49.178979   38042 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:37:49.179355   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:49.179397   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.194379   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39061
	I0719 18:37:49.194812   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.195226   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.195245   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.195518   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.195683   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:37:49.197425   38042 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:37:49.197441   38042 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:49.197757   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:49.197794   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.212480   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0719 18:37:49.212840   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.213273   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.213291   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.213593   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.213764   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:37:49.216868   38042 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:49.217354   38042 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:49.217390   38042 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:49.217535   38042 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:49.217808   38042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:49.217840   38042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:49.232740   38042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
	I0719 18:37:49.233159   38042 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:49.233608   38042 main.go:141] libmachine: Using API Version  1
	I0719 18:37:49.233636   38042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:49.233942   38042 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:49.234107   38042 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:37:49.234290   38042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:49.234333   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:37:49.236892   38042 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:49.237332   38042 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:49.237357   38042 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:49.237435   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:37:49.237619   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:37:49.237778   38042 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:37:49.237891   38042 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:37:49.318495   38042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:49.331881   38042 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (4.358084924s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:37:51.418797   38166 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:37:51.418933   38166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:51.418942   38166 out.go:304] Setting ErrFile to fd 2...
	I0719 18:37:51.418946   38166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:51.419122   38166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:37:51.419312   38166 out.go:298] Setting JSON to false
	I0719 18:37:51.419342   38166 mustload.go:65] Loading cluster: ha-269108
	I0719 18:37:51.419476   38166 notify.go:220] Checking for updates...
	I0719 18:37:51.419791   38166 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:37:51.419808   38166 status.go:255] checking status of ha-269108 ...
	I0719 18:37:51.420215   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.420252   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.440875   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0719 18:37:51.441295   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.441887   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.441907   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.442346   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.442602   38166 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:37:51.444483   38166 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:37:51.444500   38166 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:51.444835   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.444876   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.460732   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0719 18:37:51.461175   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.461653   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.461673   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.461985   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.462183   38166 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:37:51.464785   38166 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:51.465258   38166 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:51.465285   38166 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:51.465472   38166 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:51.465756   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.465785   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.480228   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0719 18:37:51.480704   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.481203   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.481222   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.481538   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.481713   38166 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:37:51.481921   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:51.481953   38166 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:37:51.484574   38166 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:51.484988   38166 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:51.485013   38166 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:51.485150   38166 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:37:51.485304   38166 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:37:51.485446   38166 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:37:51.485554   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:37:51.566810   38166 ssh_runner.go:195] Run: systemctl --version
	I0719 18:37:51.572405   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:51.588549   38166 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:51.588575   38166 api_server.go:166] Checking apiserver status ...
	I0719 18:37:51.588606   38166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:51.602466   38166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:37:51.611796   38166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:51.611867   38166 ssh_runner.go:195] Run: ls
	I0719 18:37:51.616013   38166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:51.620072   38166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:51.620096   38166 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:37:51.620108   38166 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:51.620128   38166 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:37:51.620468   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.620502   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.635720   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I0719 18:37:51.636194   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.636691   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.636713   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.637054   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.637225   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:37:51.638878   38166 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:37:51.638896   38166 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:51.639203   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.639235   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.654409   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0719 18:37:51.654763   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.655128   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.655150   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.655457   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.655682   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:37:51.658445   38166 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:51.658891   38166 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:51.658915   38166 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:51.659082   38166 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:51.659401   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:51.659461   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:51.674619   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0719 18:37:51.675047   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:51.675539   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:51.675560   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:51.675933   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:51.676188   38166 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:37:51.676378   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:51.676398   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:37:51.679037   38166 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:51.679449   38166 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:51.679484   38166 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:51.679614   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:37:51.679774   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:37:51.679941   38166 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:37:51.680055   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:37:52.064137   38166 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:52.064199   38166 retry.go:31] will retry after 267.45936ms: dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:55.392067   38166 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:37:55.392146   38166 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:37:55.392159   38166 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:55.392168   38166 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:37:55.392195   38166 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:55.392213   38166 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:37:55.392532   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.392568   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.406970   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0719 18:37:55.407433   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.407966   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.407988   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.408295   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.408494   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:37:55.410123   38166 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:37:55.410139   38166 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:55.410532   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.410582   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.425789   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44513
	I0719 18:37:55.426156   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.426632   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.426654   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.426933   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.427128   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:37:55.429921   38166 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:55.430394   38166 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:55.430414   38166 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:55.430569   38166 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:37:55.430969   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.431009   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.445590   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I0719 18:37:55.446035   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.446551   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.446574   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.446862   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.447037   38166 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:37:55.447208   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:55.447226   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:37:55.449880   38166 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:55.450287   38166 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:37:55.450327   38166 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:37:55.450426   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:37:55.450611   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:37:55.450748   38166 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:37:55.450905   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:37:55.536146   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:55.550623   38166 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:55.550646   38166 api_server.go:166] Checking apiserver status ...
	I0719 18:37:55.550682   38166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:55.562561   38166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:37:55.570810   38166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:55.570875   38166 ssh_runner.go:195] Run: ls
	I0719 18:37:55.574728   38166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:55.581334   38166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:55.581356   38166 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:37:55.581365   38166 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:55.581384   38166 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:37:55.581696   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.581732   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.596794   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42735
	I0719 18:37:55.597156   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.597694   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.597716   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.597992   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.598193   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:37:55.599866   38166 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:37:55.599881   38166 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:55.600162   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.600200   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.614483   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0719 18:37:55.614838   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.615303   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.615337   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.615664   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.615882   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:37:55.618642   38166 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:55.619089   38166 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:55.619118   38166 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:55.619266   38166 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:37:55.619558   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:55.619617   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:55.634352   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38783
	I0719 18:37:55.634849   38166 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:55.635366   38166 main.go:141] libmachine: Using API Version  1
	I0719 18:37:55.635390   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:55.635720   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:55.635905   38166 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:37:55.636100   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:55.636124   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:37:55.639159   38166 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:55.639686   38166 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:37:55.639711   38166 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:37:55.639797   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:37:55.639983   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:37:55.640162   38166 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:37:55.640296   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:37:55.722511   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:55.735979   38166 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (4.869553009s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:37:57.314730   38266 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:37:57.314973   38266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:57.314982   38266 out.go:304] Setting ErrFile to fd 2...
	I0719 18:37:57.314986   38266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:37:57.315172   38266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:37:57.315337   38266 out.go:298] Setting JSON to false
	I0719 18:37:57.315364   38266 mustload.go:65] Loading cluster: ha-269108
	I0719 18:37:57.315420   38266 notify.go:220] Checking for updates...
	I0719 18:37:57.315908   38266 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:37:57.315928   38266 status.go:255] checking status of ha-269108 ...
	I0719 18:37:57.316391   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.316447   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.335934   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0719 18:37:57.336314   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.336910   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.336942   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.337279   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.337519   38266 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:37:57.338759   38266 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:37:57.338774   38266 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:57.339078   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.339128   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.353921   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0719 18:37:57.354298   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.354736   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.354762   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.355067   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.355269   38266 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:37:57.357931   38266 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:57.358331   38266 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:57.358363   38266 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:57.358526   38266 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:37:57.358847   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.358878   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.373157   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0719 18:37:57.373672   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.374143   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.374169   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.374462   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.374639   38266 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:37:57.374840   38266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:57.374862   38266 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:37:57.377510   38266 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:57.377937   38266 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:37:57.377971   38266 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:37:57.378103   38266 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:37:57.378267   38266 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:37:57.378388   38266 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:37:57.378545   38266 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:37:57.456122   38266 ssh_runner.go:195] Run: systemctl --version
	I0719 18:37:57.461778   38266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:37:57.475743   38266 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:37:57.475769   38266 api_server.go:166] Checking apiserver status ...
	I0719 18:37:57.475799   38266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:37:57.490188   38266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:37:57.499006   38266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:37:57.499081   38266 ssh_runner.go:195] Run: ls
	I0719 18:37:57.503366   38266 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:37:57.507499   38266 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:37:57.507518   38266 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:37:57.507526   38266 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:37:57.507540   38266 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:37:57.507828   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.507879   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.522803   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44025
	I0719 18:37:57.523202   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.523629   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.523650   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.523977   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.524142   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:37:57.525645   38266 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:37:57.525662   38266 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:57.525973   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.526006   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.540421   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46539
	I0719 18:37:57.540779   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.541257   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.541276   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.541584   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.541773   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:37:57.544446   38266 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:57.544906   38266 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:57.544934   38266 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:57.545036   38266 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:37:57.545378   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:37:57.545421   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:37:57.560399   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0719 18:37:57.560846   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:37:57.561284   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:37:57.561304   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:37:57.561622   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:37:57.561798   38266 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:37:57.562064   38266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:37:57.562083   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:37:57.564664   38266 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:57.565119   38266 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:37:57.565144   38266 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:37:57.565295   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:37:57.565477   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:37:57.565654   38266 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:37:57.565799   38266 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:37:58.464192   38266 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:37:58.464235   38266 retry.go:31] will retry after 263.765436ms: dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:38:01.792066   38266 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:38:01.792176   38266 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:38:01.792199   38266 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:01.792209   38266 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:38:01.792230   38266 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:01.792240   38266 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:38:01.792703   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:01.792763   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:01.808677   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0719 18:38:01.809116   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:01.809655   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:01.809683   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:01.809981   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:01.810155   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:01.811993   38266 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:38:01.812020   38266 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:01.812289   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:01.812319   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:01.827264   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0719 18:38:01.827724   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:01.828169   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:01.828187   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:01.828493   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:01.828666   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:38:01.831634   38266 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:01.832088   38266 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:01.832114   38266 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:01.832270   38266 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:01.832670   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:01.832713   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:01.849290   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0719 18:38:01.849695   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:01.850137   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:01.850156   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:01.850464   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:01.850673   38266 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:01.850862   38266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:01.850881   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:01.853657   38266 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:01.854106   38266 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:01.854126   38266 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:01.854270   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:01.854473   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:01.854651   38266 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:01.854811   38266 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:01.939609   38266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:01.952595   38266 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:01.952624   38266 api_server.go:166] Checking apiserver status ...
	I0719 18:38:01.952657   38266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:01.971897   38266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:38:01.981626   38266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:01.981696   38266 ssh_runner.go:195] Run: ls
	I0719 18:38:01.985831   38266 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:01.992210   38266 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:01.992230   38266 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:38:01.992251   38266 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:01.992274   38266 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:38:01.992543   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:01.992583   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:02.007606   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0719 18:38:02.008009   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:02.008518   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:02.008545   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:02.008848   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:02.009024   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:02.010549   38266 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:38:02.010565   38266 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:02.010933   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:02.010977   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:02.025397   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0719 18:38:02.025853   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:02.026325   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:02.026355   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:02.026669   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:02.026871   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:38:02.029461   38266 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:02.029930   38266 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:02.029957   38266 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:02.030117   38266 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:02.030420   38266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:02.030466   38266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:02.044665   38266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0719 18:38:02.045029   38266 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:02.045471   38266 main.go:141] libmachine: Using API Version  1
	I0719 18:38:02.045496   38266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:02.045911   38266 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:02.046097   38266 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:02.046266   38266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:02.046287   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:02.048936   38266 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:02.049450   38266 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:02.049472   38266 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:02.049616   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:02.049802   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:02.049950   38266 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:02.050063   38266 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:02.130487   38266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:02.143417   38266 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (3.737296049s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:38:05.435949   38383 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:38:05.436192   38383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:05.436200   38383 out.go:304] Setting ErrFile to fd 2...
	I0719 18:38:05.436204   38383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:05.436355   38383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:38:05.436510   38383 out.go:298] Setting JSON to false
	I0719 18:38:05.436536   38383 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:05.436609   38383 notify.go:220] Checking for updates...
	I0719 18:38:05.436898   38383 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:05.436911   38383 status.go:255] checking status of ha-269108 ...
	I0719 18:38:05.437303   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.437349   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.457487   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0719 18:38:05.457898   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.458495   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.458515   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.458867   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.459088   38383 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:38:05.460785   38383 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:38:05.460800   38383 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:05.461065   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.461097   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.475627   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0719 18:38:05.476058   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.476520   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.476541   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.476808   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.476983   38383 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:38:05.479919   38383 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:05.480400   38383 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:05.480427   38383 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:05.480566   38383 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:05.480854   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.480896   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.494823   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0719 18:38:05.495230   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.495681   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.495701   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.495981   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.496170   38383 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:38:05.496319   38383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:05.496353   38383 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:38:05.498879   38383 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:05.499342   38383 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:05.499367   38383 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:05.499513   38383 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:38:05.499666   38383 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:38:05.499801   38383 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:38:05.500027   38383 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:38:05.574682   38383 ssh_runner.go:195] Run: systemctl --version
	I0719 18:38:05.580603   38383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:05.596805   38383 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:05.596831   38383 api_server.go:166] Checking apiserver status ...
	I0719 18:38:05.596861   38383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:05.610444   38383 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:38:05.619319   38383 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:05.619370   38383 ssh_runner.go:195] Run: ls
	I0719 18:38:05.623577   38383 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:05.631211   38383 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:05.631231   38383 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:38:05.631240   38383 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:05.631258   38383 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:38:05.631599   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.631636   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.648842   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0719 18:38:05.649244   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.649817   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.649831   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.650133   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.650335   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:38:05.652036   38383 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:38:05.652050   38383 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:38:05.652316   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.652346   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.667397   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0719 18:38:05.667778   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.668233   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.668251   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.668521   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.668817   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:38:05.671563   38383 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:05.671985   38383 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:38:05.672008   38383 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:05.672207   38383 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:38:05.672597   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:05.672639   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:05.687351   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0719 18:38:05.687751   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:05.688250   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:05.688270   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:05.688538   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:05.688724   38383 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:38:05.688876   38383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:05.688897   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:38:05.691600   38383 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:05.692015   38383 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:38:05.692047   38383 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:05.692146   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:38:05.692311   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:38:05.692482   38383 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:38:05.692618   38383 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:38:08.772068   38383 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:38:08.772166   38383 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:38:08.772185   38383 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:08.772192   38383 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:38:08.772208   38383 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:08.772216   38383 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:38:08.772580   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:08.772628   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:08.789138   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0719 18:38:08.789621   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:08.790130   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:08.790149   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:08.790428   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:08.790695   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:08.792225   38383 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:38:08.792240   38383 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:08.792500   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:08.792529   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:08.808526   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37831
	I0719 18:38:08.809018   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:08.809480   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:08.809504   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:08.809819   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:08.810006   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:38:08.812835   38383 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:08.813226   38383 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:08.813253   38383 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:08.813376   38383 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:08.813698   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:08.813738   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:08.828149   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38217
	I0719 18:38:08.828608   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:08.829203   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:08.829226   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:08.829554   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:08.829706   38383 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:08.829911   38383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:08.829934   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:08.832635   38383 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:08.833041   38383 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:08.833064   38383 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:08.833211   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:08.833446   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:08.833586   38383 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:08.833703   38383 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:08.921414   38383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:08.938506   38383 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:08.938533   38383 api_server.go:166] Checking apiserver status ...
	I0719 18:38:08.938562   38383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:08.958168   38383 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:38:08.969658   38383 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:08.969705   38383 ssh_runner.go:195] Run: ls
	I0719 18:38:08.974360   38383 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:08.978580   38383 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:08.978601   38383 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:38:08.978609   38383 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:08.978624   38383 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:38:08.978906   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:08.978938   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:08.995021   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0719 18:38:08.995441   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:08.995912   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:08.995940   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:08.996288   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:08.996476   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:08.997843   38383 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:38:08.997857   38383 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:08.998119   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:08.998152   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:09.012823   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0719 18:38:09.013272   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:09.013684   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:09.013703   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:09.014010   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:09.014215   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:38:09.016736   38383 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:09.017151   38383 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:09.017182   38383 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:09.017342   38383 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:09.017634   38383 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:09.017678   38383 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:09.031874   38383 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0719 18:38:09.032437   38383 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:09.032945   38383 main.go:141] libmachine: Using API Version  1
	I0719 18:38:09.032967   38383 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:09.033246   38383 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:09.033456   38383 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:09.033619   38383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:09.033658   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:09.036706   38383 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:09.037186   38383 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:09.037211   38383 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:09.037353   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:09.037563   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:09.037756   38383 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:09.037924   38383 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:09.118950   38383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:09.132877   38383 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
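(For context when reading the per-node checks in the transcript above: a minimal shell sketch of the probes the status command logs against each node. The commands, IP addresses, SSH key path, and the docker username are copied from this run's log; the use of curl with -k against the control-plane VIP is an assumption for illustration, not how the tool itself performs the healthz check.)

	# Sketch only: reproduce the checks logged for ha-269108-m03 in this run.
	NODE_IP=192.168.39.111
	KEY=/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa
	ssh -i "$KEY" docker@"$NODE_IP" "df -h /var | awk 'NR==2{print \$5}'"                 # storage capacity of /var
	ssh -i "$KEY" docker@"$NODE_IP" "sudo systemctl is-active --quiet service kubelet" && echo kubelet-active
	curl -sk https://192.168.39.254:8443/healthz                                          # apiserver health via the cluster VIP (expects "ok")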
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (3.690963043s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:38:12.477921   38484 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:38:12.478042   38484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:12.478051   38484 out.go:304] Setting ErrFile to fd 2...
	I0719 18:38:12.478056   38484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:12.478239   38484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:38:12.478394   38484 out.go:298] Setting JSON to false
	I0719 18:38:12.478429   38484 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:12.478469   38484 notify.go:220] Checking for updates...
	I0719 18:38:12.478957   38484 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:12.478977   38484 status.go:255] checking status of ha-269108 ...
	I0719 18:38:12.479402   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.479440   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.498169   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43835
	I0719 18:38:12.498533   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.499067   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.499095   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.499411   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.499578   38484 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:38:12.501233   38484 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:38:12.501246   38484 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:12.501491   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.501526   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.515248   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I0719 18:38:12.515574   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.515979   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.515998   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.516326   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.516490   38484 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:38:12.519148   38484 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:12.519588   38484 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:12.519631   38484 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:12.519704   38484 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:12.520031   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.520067   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.533587   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0719 18:38:12.533927   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.534497   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.534524   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.534825   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.535042   38484 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:38:12.535302   38484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:12.535337   38484 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:38:12.538204   38484 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:12.538613   38484 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:12.538641   38484 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:12.538821   38484 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:38:12.538999   38484 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:38:12.539139   38484 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:38:12.539249   38484 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:38:12.614969   38484 ssh_runner.go:195] Run: systemctl --version
	I0719 18:38:12.620801   38484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:12.635098   38484 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:12.635123   38484 api_server.go:166] Checking apiserver status ...
	I0719 18:38:12.635160   38484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:12.648817   38484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:38:12.657819   38484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:12.657861   38484 ssh_runner.go:195] Run: ls
	I0719 18:38:12.661844   38484 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:12.665988   38484 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:12.666010   38484 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:38:12.666022   38484 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:12.666040   38484 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:38:12.666592   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.666641   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.682035   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I0719 18:38:12.682514   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.683050   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.683072   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.683358   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.683544   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:38:12.685023   38484 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:38:12.685040   38484 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:38:12.685299   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.685333   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.699289   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45439
	I0719 18:38:12.699695   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.700119   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.700140   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.700518   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.700698   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:38:12.703421   38484 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:12.703796   38484 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:38:12.703823   38484 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:12.703960   38484 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:38:12.704279   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:12.704313   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:12.718486   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0719 18:38:12.718871   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:12.719410   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:12.719443   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:12.719734   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:12.719951   38484 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:38:12.720122   38484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:12.720141   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:38:12.722715   38484 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:12.723071   38484 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:38:12.723096   38484 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:38:12.723230   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:38:12.723386   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:38:12.723507   38484 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:38:12.723627   38484 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	W0719 18:38:15.780094   38484 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.87:22: connect: no route to host
	W0719 18:38:15.780210   38484 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	E0719 18:38:15.780233   38484 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:15.780245   38484 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:38:15.780278   38484 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.87:22: connect: no route to host
	I0719 18:38:15.780293   38484 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:38:15.780624   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:15.780681   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:15.794956   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0719 18:38:15.795441   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:15.795930   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:15.795956   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:15.796226   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:15.796408   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:15.798188   38484 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:38:15.798207   38484 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:15.798494   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:15.798532   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:15.812591   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0719 18:38:15.812950   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:15.813361   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:15.813379   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:15.813691   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:15.813885   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:38:15.816355   38484 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:15.816727   38484 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:15.816761   38484 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:15.816845   38484 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:15.817152   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:15.817184   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:15.831013   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0719 18:38:15.831439   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:15.831926   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:15.831953   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:15.832242   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:15.832438   38484 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:15.832644   38484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:15.832668   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:15.835078   38484 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:15.835582   38484 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:15.835605   38484 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:15.835778   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:15.836000   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:15.836157   38484 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:15.836279   38484 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:15.919517   38484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:15.932899   38484 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:15.932924   38484 api_server.go:166] Checking apiserver status ...
	I0719 18:38:15.932952   38484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:15.945810   38484 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:38:15.959138   38484 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:15.959192   38484 ssh_runner.go:195] Run: ls
	I0719 18:38:15.964099   38484 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:15.969963   38484 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:15.969988   38484 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:38:15.969999   38484 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:15.970018   38484 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:38:15.970375   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:15.970411   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:15.989931   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0719 18:38:15.990355   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:15.990827   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:15.990845   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:15.991093   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:15.991240   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:15.992749   38484 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:38:15.992765   38484 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:15.993123   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:15.993164   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:16.008068   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0719 18:38:16.008468   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:16.008874   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:16.008895   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:16.009174   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:16.009338   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:38:16.012072   38484 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:16.012492   38484 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:16.012527   38484 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:16.012659   38484 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:16.012934   38484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:16.012986   38484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:16.027332   38484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45407
	I0719 18:38:16.027753   38484 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:16.028259   38484 main.go:141] libmachine: Using API Version  1
	I0719 18:38:16.028279   38484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:16.028564   38484 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:16.028757   38484 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:16.028956   38484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:16.028977   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:16.031658   38484 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:16.032045   38484 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:16.032069   38484 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:16.032201   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:16.032345   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:16.032514   38484 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:16.032734   38484 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:16.114761   38484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:16.128100   38484 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
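(The m02 result above follows from the SSH dial failing with "no route to host", which the status command reports as Host:Error with exit status 3; once libvirt reports the domain as stopped, later runs below report Host:Stopped with exit status 7 instead. A rough way to confirm the same unreachability by hand, using only the IP and key path shown in this run's log; the ping/ssh timeouts here are illustrative assumptions:)

	# Sketch only: check reachability of ha-269108-m02 as seen in this run.
	ping -c1 -W2 192.168.39.87 || echo "no route to ha-269108-m02"
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa \
	  docker@192.168.39.87 true || echo "ssh to ha-269108-m02 failed (matches the status error above)"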
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 7 (606.140582ms)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:38:21.181310   38618 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:38:21.181427   38618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:21.181437   38618 out.go:304] Setting ErrFile to fd 2...
	I0719 18:38:21.181442   38618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:21.181689   38618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:38:21.181885   38618 out.go:298] Setting JSON to false
	I0719 18:38:21.181921   38618 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:21.182061   38618 notify.go:220] Checking for updates...
	I0719 18:38:21.182448   38618 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:21.182467   38618 status.go:255] checking status of ha-269108 ...
	I0719 18:38:21.182935   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.182975   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.201755   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39195
	I0719 18:38:21.202181   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.202668   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.202712   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.203042   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.203269   38618 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:38:21.205093   38618 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:38:21.205110   38618 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:21.205396   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.205439   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.220445   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0719 18:38:21.220836   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.221394   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.221418   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.221711   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.221905   38618 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:38:21.224942   38618 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:21.225296   38618 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:21.225331   38618 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:21.225372   38618 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:21.225653   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.225687   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.241292   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0719 18:38:21.241731   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.242173   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.242193   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.242535   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.242694   38618 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:38:21.242872   38618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:21.242894   38618 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:38:21.245576   38618 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:21.246015   38618 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:21.246045   38618 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:21.246185   38618 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:38:21.246340   38618 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:38:21.246489   38618 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:38:21.246635   38618 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:38:21.335704   38618 ssh_runner.go:195] Run: systemctl --version
	I0719 18:38:21.341613   38618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:21.357523   38618 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:21.357549   38618 api_server.go:166] Checking apiserver status ...
	I0719 18:38:21.357589   38618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:21.371620   38618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:38:21.380033   38618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:21.380080   38618 ssh_runner.go:195] Run: ls
	I0719 18:38:21.384453   38618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:21.388611   38618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:21.388628   38618 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:38:21.388638   38618 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:21.388652   38618 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:38:21.388911   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.388940   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.404506   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I0719 18:38:21.404967   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.405462   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.405491   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.405790   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.406011   38618 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:38:21.407658   38618 status.go:330] ha-269108-m02 host status = "Stopped" (err=<nil>)
	I0719 18:38:21.407684   38618 status.go:343] host is not running, skipping remaining checks
	I0719 18:38:21.407692   38618 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:21.407707   38618 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:38:21.408079   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.408122   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.422302   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0719 18:38:21.422704   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.423108   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.423127   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.423431   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.423599   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:21.425191   38618 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:38:21.425214   38618 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:21.425528   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.425565   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.439730   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0719 18:38:21.440192   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.440613   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.440632   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.440913   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.441099   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:38:21.443735   38618 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:21.444212   38618 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:21.444232   38618 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:21.444347   38618 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:21.444729   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.444767   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.458724   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0719 18:38:21.459098   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.459531   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.459552   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.459868   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.460049   38618 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:21.460223   38618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:21.460241   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:21.462902   38618 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:21.463321   38618 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:21.463352   38618 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:21.463481   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:21.463640   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:21.463794   38618 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:21.463937   38618 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:21.546994   38618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:21.561583   38618 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:21.561612   38618 api_server.go:166] Checking apiserver status ...
	I0719 18:38:21.561648   38618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:21.573958   38618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:38:21.582022   38618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:21.582065   38618 ssh_runner.go:195] Run: ls
	I0719 18:38:21.586328   38618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:21.590332   38618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:21.590347   38618 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:38:21.590355   38618 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:21.590380   38618 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:38:21.590647   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.590678   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.605379   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0719 18:38:21.605772   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.606194   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.606213   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.606584   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.606780   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:21.608423   38618 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:38:21.608439   38618 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:21.608711   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.608741   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.623275   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I0719 18:38:21.623649   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.624056   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.624076   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.624379   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.624551   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:38:21.627268   38618 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:21.627739   38618 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:21.627765   38618 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:21.627946   38618 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:21.628257   38618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:21.628300   38618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:21.642299   38618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0719 18:38:21.642642   38618 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:21.643146   38618 main.go:141] libmachine: Using API Version  1
	I0719 18:38:21.643171   38618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:21.643474   38618 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:21.643655   38618 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:21.643816   38618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:21.643850   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:21.646510   38618 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:21.646914   38618 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:21.646935   38618 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:21.647080   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:21.647234   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:21.647401   38618 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:21.647535   38618 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:21.730762   38618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:21.746300   38618 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 7 (607.378801ms)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-269108-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:38:35.989442   38739 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:38:35.989534   38739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:35.989539   38739 out.go:304] Setting ErrFile to fd 2...
	I0719 18:38:35.989543   38739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:35.989703   38739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:38:35.989847   38739 out.go:298] Setting JSON to false
	I0719 18:38:35.989871   38739 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:35.989919   38739 notify.go:220] Checking for updates...
	I0719 18:38:35.990207   38739 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:35.990220   38739 status.go:255] checking status of ha-269108 ...
	I0719 18:38:35.990585   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:35.990635   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.009184   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0719 18:38:36.009708   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.010365   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.010396   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.010807   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.011040   38739 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:38:36.012617   38739 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:38:36.012633   38739 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:36.012888   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.012920   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.029340   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0719 18:38:36.029703   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.030140   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.030162   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.030539   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.030777   38739 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:38:36.034060   38739 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:36.034538   38739 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:36.034568   38739 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:36.034720   38739 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:38:36.035090   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.035128   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.050547   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I0719 18:38:36.050890   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.051310   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.051337   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.051661   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.051855   38739 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:38:36.052038   38739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:36.052087   38739 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:38:36.054559   38739 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:36.054922   38739 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:38:36.054955   38739 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:38:36.055085   38739 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:38:36.055262   38739 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:38:36.055405   38739 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:38:36.055550   38739 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:38:36.135677   38739 ssh_runner.go:195] Run: systemctl --version
	I0719 18:38:36.141583   38739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:36.156328   38739 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:36.156354   38739 api_server.go:166] Checking apiserver status ...
	I0719 18:38:36.156389   38739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:36.169865   38739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	W0719 18:38:36.178519   38739 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:36.178573   38739 ssh_runner.go:195] Run: ls
	I0719 18:38:36.183825   38739 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:36.188491   38739 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:36.188515   38739 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:38:36.188534   38739 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:36.188551   38739 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:38:36.188928   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.188969   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.204134   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33655
	I0719 18:38:36.204609   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.205100   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.205117   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.205458   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.205678   38739 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:38:36.207335   38739 status.go:330] ha-269108-m02 host status = "Stopped" (err=<nil>)
	I0719 18:38:36.207350   38739 status.go:343] host is not running, skipping remaining checks
	I0719 18:38:36.207357   38739 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:36.207375   38739 status.go:255] checking status of ha-269108-m03 ...
	I0719 18:38:36.207682   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.207725   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.224107   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0719 18:38:36.224604   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.225058   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.225078   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.225424   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.225590   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:36.227242   38739 status.go:330] ha-269108-m03 host status = "Running" (err=<nil>)
	I0719 18:38:36.227257   38739 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:36.227526   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.227557   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.241872   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0719 18:38:36.242305   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.242810   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.242830   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.243144   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.243348   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:38:36.246371   38739 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:36.246813   38739 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:36.246834   38739 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:36.246957   38739 host.go:66] Checking if "ha-269108-m03" exists ...
	I0719 18:38:36.247249   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.247286   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.261694   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0719 18:38:36.262124   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.262714   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.262728   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.263162   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.263347   38739 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:36.263543   38739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:36.263566   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:36.266200   38739 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:36.266597   38739 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:36.266623   38739 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:36.266758   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:36.266902   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:36.266999   38739 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:36.267150   38739 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:36.351516   38739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:36.366167   38739 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:38:36.366192   38739 api_server.go:166] Checking apiserver status ...
	I0719 18:38:36.366223   38739 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:38:36.378572   38739 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup
	W0719 18:38:36.387555   38739 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1542/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:38:36.387621   38739 ssh_runner.go:195] Run: ls
	I0719 18:38:36.391967   38739 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:38:36.396150   38739 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:38:36.396173   38739 status.go:422] ha-269108-m03 apiserver status = Running (err=<nil>)
	I0719 18:38:36.396184   38739 status.go:257] ha-269108-m03 status: &{Name:ha-269108-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:38:36.396203   38739 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:38:36.396479   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.396510   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.411119   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0719 18:38:36.411550   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.412121   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.412156   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.412478   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.412689   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:36.414204   38739 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:38:36.414220   38739 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:36.414553   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.414586   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.429296   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0719 18:38:36.429722   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.430191   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.430220   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.430548   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.430739   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:38:36.433906   38739 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:36.434303   38739 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:36.434334   38739 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:36.434469   38739 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:38:36.434775   38739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:36.434808   38739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:36.449152   38739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0719 18:38:36.449625   38739 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:36.450093   38739 main.go:141] libmachine: Using API Version  1
	I0719 18:38:36.450112   38739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:36.450411   38739 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:36.450597   38739 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:36.450784   38739 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:38:36.450805   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:36.453643   38739 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:36.454040   38739 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:36.454070   38739 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:36.454201   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:36.454398   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:36.454546   38739 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:36.454701   38739 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:36.538902   38739 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:38:36.554727   38739 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr" : exit status 7
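The exit status 7 above is consistent with the per-node status in the stderr block: ha-269108-m02 reports Host:Stopped/Kubelet:Stopped/APIServer:Stopped while the other nodes are Running, and minikube status returns a non-zero exit code whenever a requested node is not fully running. As a rough illustration only (not the test's actual helper code), a minimal Go sketch of the same check, reusing the binary path and profile name shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test reports as failing: minikube status for profile ha-269108.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "ha-269108", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok {
		// With ha-269108-m02 reporting Stopped, the command exits non-zero (7 in this run),
		// which the test treats as a failure at ha_test.go:432.
		fmt.Printf("minikube status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run minikube status: %v\n", err)
	}
}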
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-269108 -n ha-269108
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-269108 logs -n 25: (1.291510968s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m03_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m04 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp testdata/cp-test.txt                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m04_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03:/home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m03 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-269108 node stop m02 -v=7                                                    | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-269108 node start m02 -v=7                                                   | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:30:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:30:31.420743   33150 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:30:31.420877   33150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:30:31.420887   33150 out.go:304] Setting ErrFile to fd 2...
	I0719 18:30:31.420893   33150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:30:31.421086   33150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:30:31.421655   33150 out.go:298] Setting JSON to false
	I0719 18:30:31.422488   33150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4374,"bootTime":1721409457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:30:31.422545   33150 start.go:139] virtualization: kvm guest
	I0719 18:30:31.424466   33150 out.go:177] * [ha-269108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:30:31.425625   33150 notify.go:220] Checking for updates...
	I0719 18:30:31.425654   33150 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:30:31.426849   33150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:30:31.427995   33150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:30:31.429188   33150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.430409   33150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:30:31.431696   33150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:30:31.433097   33150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:30:31.466582   33150 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 18:30:31.467578   33150 start.go:297] selected driver: kvm2
	I0719 18:30:31.467592   33150 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:30:31.467605   33150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:30:31.468277   33150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:30:31.468355   33150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:30:31.483111   33150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:30:31.483160   33150 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:30:31.483364   33150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:30:31.483428   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:30:31.483440   33150 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 18:30:31.483445   33150 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 18:30:31.483529   33150 start.go:340] cluster config:
	{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:30:31.483624   33150 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:30:31.485254   33150 out.go:177] * Starting "ha-269108" primary control-plane node in "ha-269108" cluster
	I0719 18:30:31.486231   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:30:31.486260   33150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:30:31.486268   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:30:31.486353   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:30:31.486365   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:30:31.486701   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:30:31.486730   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json: {Name:mk289cdb38a1e07701b9aa5db837f263a01da0ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:30:31.486879   33150 start.go:360] acquireMachinesLock for ha-269108: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:30:31.486913   33150 start.go:364] duration metric: took 18.469µs to acquireMachinesLock for "ha-269108"
	I0719 18:30:31.486935   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:30:31.486990   33150 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 18:30:31.488436   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:30:31.488586   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:30:31.488625   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:30:31.502690   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0719 18:30:31.503132   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:30:31.503625   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:30:31.503649   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:30:31.503997   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:30:31.504137   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:31.504279   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:31.504420   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:30:31.504442   33150 client.go:168] LocalClient.Create starting
	I0719 18:30:31.504471   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:30:31.504500   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:30:31.504512   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:30:31.504558   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:30:31.504575   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:30:31.504586   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:30:31.504600   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:30:31.504608   33150 main.go:141] libmachine: (ha-269108) Calling .PreCreateCheck
	I0719 18:30:31.504898   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:31.505239   33150 main.go:141] libmachine: Creating machine...
	I0719 18:30:31.505250   33150 main.go:141] libmachine: (ha-269108) Calling .Create
	I0719 18:30:31.505357   33150 main.go:141] libmachine: (ha-269108) Creating KVM machine...
	I0719 18:30:31.506439   33150 main.go:141] libmachine: (ha-269108) DBG | found existing default KVM network
	I0719 18:30:31.507078   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.506961   33173 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0719 18:30:31.507122   33150 main.go:141] libmachine: (ha-269108) DBG | created network xml: 
	I0719 18:30:31.507137   33150 main.go:141] libmachine: (ha-269108) DBG | <network>
	I0719 18:30:31.507146   33150 main.go:141] libmachine: (ha-269108) DBG |   <name>mk-ha-269108</name>
	I0719 18:30:31.507169   33150 main.go:141] libmachine: (ha-269108) DBG |   <dns enable='no'/>
	I0719 18:30:31.507181   33150 main.go:141] libmachine: (ha-269108) DBG |   
	I0719 18:30:31.507189   33150 main.go:141] libmachine: (ha-269108) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 18:30:31.507199   33150 main.go:141] libmachine: (ha-269108) DBG |     <dhcp>
	I0719 18:30:31.507211   33150 main.go:141] libmachine: (ha-269108) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 18:30:31.507222   33150 main.go:141] libmachine: (ha-269108) DBG |     </dhcp>
	I0719 18:30:31.507241   33150 main.go:141] libmachine: (ha-269108) DBG |   </ip>
	I0719 18:30:31.507250   33150 main.go:141] libmachine: (ha-269108) DBG |   
	I0719 18:30:31.507259   33150 main.go:141] libmachine: (ha-269108) DBG | </network>
	I0719 18:30:31.507271   33150 main.go:141] libmachine: (ha-269108) DBG | 
	I0719 18:30:31.511990   33150 main.go:141] libmachine: (ha-269108) DBG | trying to create private KVM network mk-ha-269108 192.168.39.0/24...
	I0719 18:30:31.573689   33150 main.go:141] libmachine: (ha-269108) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 ...
	I0719 18:30:31.573720   33150 main.go:141] libmachine: (ha-269108) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:30:31.573733   33150 main.go:141] libmachine: (ha-269108) DBG | private KVM network mk-ha-269108 192.168.39.0/24 created
	I0719 18:30:31.573782   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.573516   33173 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.573811   33150 main.go:141] libmachine: (ha-269108) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:30:31.802036   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.801927   33173 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa...
	I0719 18:30:31.964243   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.964084   33173 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/ha-269108.rawdisk...
	I0719 18:30:31.964271   33150 main.go:141] libmachine: (ha-269108) DBG | Writing magic tar header
	I0719 18:30:31.964362   33150 main.go:141] libmachine: (ha-269108) DBG | Writing SSH key tar header
	I0719 18:30:31.964404   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 (perms=drwx------)
	I0719 18:30:31.964421   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:31.964199   33173 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108 ...
	I0719 18:30:31.964430   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:30:31.964444   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:30:31.964450   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:30:31.964457   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108
	I0719 18:30:31.964466   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:30:31.964474   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:30:31.964482   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:30:31.964489   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:30:31.964495   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:30:31.964502   33150 main.go:141] libmachine: (ha-269108) DBG | Checking permissions on dir: /home
	I0719 18:30:31.964508   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:30:31.964521   33150 main.go:141] libmachine: (ha-269108) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:30:31.964529   33150 main.go:141] libmachine: (ha-269108) Creating domain...
	I0719 18:30:31.964534   33150 main.go:141] libmachine: (ha-269108) DBG | Skipping /home - not owner
	I0719 18:30:31.965495   33150 main.go:141] libmachine: (ha-269108) define libvirt domain using xml: 
	I0719 18:30:31.965508   33150 main.go:141] libmachine: (ha-269108) <domain type='kvm'>
	I0719 18:30:31.965513   33150 main.go:141] libmachine: (ha-269108)   <name>ha-269108</name>
	I0719 18:30:31.965518   33150 main.go:141] libmachine: (ha-269108)   <memory unit='MiB'>2200</memory>
	I0719 18:30:31.965523   33150 main.go:141] libmachine: (ha-269108)   <vcpu>2</vcpu>
	I0719 18:30:31.965527   33150 main.go:141] libmachine: (ha-269108)   <features>
	I0719 18:30:31.965532   33150 main.go:141] libmachine: (ha-269108)     <acpi/>
	I0719 18:30:31.965536   33150 main.go:141] libmachine: (ha-269108)     <apic/>
	I0719 18:30:31.965541   33150 main.go:141] libmachine: (ha-269108)     <pae/>
	I0719 18:30:31.965547   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965552   33150 main.go:141] libmachine: (ha-269108)   </features>
	I0719 18:30:31.965561   33150 main.go:141] libmachine: (ha-269108)   <cpu mode='host-passthrough'>
	I0719 18:30:31.965565   33150 main.go:141] libmachine: (ha-269108)   
	I0719 18:30:31.965571   33150 main.go:141] libmachine: (ha-269108)   </cpu>
	I0719 18:30:31.965576   33150 main.go:141] libmachine: (ha-269108)   <os>
	I0719 18:30:31.965580   33150 main.go:141] libmachine: (ha-269108)     <type>hvm</type>
	I0719 18:30:31.965586   33150 main.go:141] libmachine: (ha-269108)     <boot dev='cdrom'/>
	I0719 18:30:31.965597   33150 main.go:141] libmachine: (ha-269108)     <boot dev='hd'/>
	I0719 18:30:31.965603   33150 main.go:141] libmachine: (ha-269108)     <bootmenu enable='no'/>
	I0719 18:30:31.965607   33150 main.go:141] libmachine: (ha-269108)   </os>
	I0719 18:30:31.965612   33150 main.go:141] libmachine: (ha-269108)   <devices>
	I0719 18:30:31.965619   33150 main.go:141] libmachine: (ha-269108)     <disk type='file' device='cdrom'>
	I0719 18:30:31.965626   33150 main.go:141] libmachine: (ha-269108)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/boot2docker.iso'/>
	I0719 18:30:31.965633   33150 main.go:141] libmachine: (ha-269108)       <target dev='hdc' bus='scsi'/>
	I0719 18:30:31.965639   33150 main.go:141] libmachine: (ha-269108)       <readonly/>
	I0719 18:30:31.965651   33150 main.go:141] libmachine: (ha-269108)     </disk>
	I0719 18:30:31.965663   33150 main.go:141] libmachine: (ha-269108)     <disk type='file' device='disk'>
	I0719 18:30:31.965674   33150 main.go:141] libmachine: (ha-269108)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:30:31.965687   33150 main.go:141] libmachine: (ha-269108)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/ha-269108.rawdisk'/>
	I0719 18:30:31.965694   33150 main.go:141] libmachine: (ha-269108)       <target dev='hda' bus='virtio'/>
	I0719 18:30:31.965699   33150 main.go:141] libmachine: (ha-269108)     </disk>
	I0719 18:30:31.965703   33150 main.go:141] libmachine: (ha-269108)     <interface type='network'>
	I0719 18:30:31.965711   33150 main.go:141] libmachine: (ha-269108)       <source network='mk-ha-269108'/>
	I0719 18:30:31.965716   33150 main.go:141] libmachine: (ha-269108)       <model type='virtio'/>
	I0719 18:30:31.965722   33150 main.go:141] libmachine: (ha-269108)     </interface>
	I0719 18:30:31.965728   33150 main.go:141] libmachine: (ha-269108)     <interface type='network'>
	I0719 18:30:31.965740   33150 main.go:141] libmachine: (ha-269108)       <source network='default'/>
	I0719 18:30:31.965752   33150 main.go:141] libmachine: (ha-269108)       <model type='virtio'/>
	I0719 18:30:31.965769   33150 main.go:141] libmachine: (ha-269108)     </interface>
	I0719 18:30:31.965779   33150 main.go:141] libmachine: (ha-269108)     <serial type='pty'>
	I0719 18:30:31.965784   33150 main.go:141] libmachine: (ha-269108)       <target port='0'/>
	I0719 18:30:31.965791   33150 main.go:141] libmachine: (ha-269108)     </serial>
	I0719 18:30:31.965796   33150 main.go:141] libmachine: (ha-269108)     <console type='pty'>
	I0719 18:30:31.965801   33150 main.go:141] libmachine: (ha-269108)       <target type='serial' port='0'/>
	I0719 18:30:31.965811   33150 main.go:141] libmachine: (ha-269108)     </console>
	I0719 18:30:31.965818   33150 main.go:141] libmachine: (ha-269108)     <rng model='virtio'>
	I0719 18:30:31.965825   33150 main.go:141] libmachine: (ha-269108)       <backend model='random'>/dev/random</backend>
	I0719 18:30:31.965831   33150 main.go:141] libmachine: (ha-269108)     </rng>
	I0719 18:30:31.965836   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965842   33150 main.go:141] libmachine: (ha-269108)     
	I0719 18:30:31.965846   33150 main.go:141] libmachine: (ha-269108)   </devices>
	I0719 18:30:31.965856   33150 main.go:141] libmachine: (ha-269108) </domain>
	I0719 18:30:31.965862   33150 main.go:141] libmachine: (ha-269108) 
	I0719 18:30:31.969928   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:ae:31:4c in network default
	I0719 18:30:31.970473   33150 main.go:141] libmachine: (ha-269108) Ensuring networks are active...
	I0719 18:30:31.970490   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:31.971060   33150 main.go:141] libmachine: (ha-269108) Ensuring network default is active
	I0719 18:30:31.971367   33150 main.go:141] libmachine: (ha-269108) Ensuring network mk-ha-269108 is active
	I0719 18:30:31.971779   33150 main.go:141] libmachine: (ha-269108) Getting domain xml...
	I0719 18:30:31.972427   33150 main.go:141] libmachine: (ha-269108) Creating domain...
	I0719 18:30:33.130639   33150 main.go:141] libmachine: (ha-269108) Waiting to get IP...
	I0719 18:30:33.131510   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.131926   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.131947   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.131911   33173 retry.go:31] will retry after 252.599588ms: waiting for machine to come up
	I0719 18:30:33.386424   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.386825   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.386854   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.386776   33173 retry.go:31] will retry after 352.1385ms: waiting for machine to come up
	I0719 18:30:33.740021   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:33.740420   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:33.740449   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:33.740367   33173 retry.go:31] will retry after 303.509496ms: waiting for machine to come up
	I0719 18:30:34.045706   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:34.046159   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:34.046206   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:34.046109   33173 retry.go:31] will retry after 567.315ms: waiting for machine to come up
	I0719 18:30:34.614510   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:34.614895   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:34.614923   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:34.614842   33173 retry.go:31] will retry after 539.769716ms: waiting for machine to come up
	I0719 18:30:35.156500   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:35.156841   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:35.156866   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:35.156795   33173 retry.go:31] will retry after 784.992453ms: waiting for machine to come up
	I0719 18:30:35.943646   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:35.944122   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:35.944160   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:35.944088   33173 retry.go:31] will retry after 823.565264ms: waiting for machine to come up
	I0719 18:30:36.768679   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:36.769041   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:36.769065   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:36.769004   33173 retry.go:31] will retry after 985.011439ms: waiting for machine to come up
	I0719 18:30:37.754996   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:37.755650   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:37.755679   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:37.755600   33173 retry.go:31] will retry after 1.801933856s: waiting for machine to come up
	I0719 18:30:39.559512   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:39.559930   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:39.559960   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:39.559885   33173 retry.go:31] will retry after 1.622391207s: waiting for machine to come up
	I0719 18:30:41.184707   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:41.185100   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:41.185129   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:41.185034   33173 retry.go:31] will retry after 2.072642534s: waiting for machine to come up
	I0719 18:30:43.260149   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:43.260515   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:43.260539   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:43.260482   33173 retry.go:31] will retry after 2.274377543s: waiting for machine to come up
	I0719 18:30:45.537822   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:45.538132   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:45.538191   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:45.538106   33173 retry.go:31] will retry after 4.101164733s: waiting for machine to come up
	I0719 18:30:49.642837   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:49.643211   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find current IP address of domain ha-269108 in network mk-ha-269108
	I0719 18:30:49.643234   33150 main.go:141] libmachine: (ha-269108) DBG | I0719 18:30:49.643170   33173 retry.go:31] will retry after 5.540453029s: waiting for machine to come up
	I0719 18:30:55.187752   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.188145   33150 main.go:141] libmachine: (ha-269108) Found IP for machine: 192.168.39.163
	I0719 18:30:55.188167   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has current primary IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.188176   33150 main.go:141] libmachine: (ha-269108) Reserving static IP address...
	I0719 18:30:55.188602   33150 main.go:141] libmachine: (ha-269108) DBG | unable to find host DHCP lease matching {name: "ha-269108", mac: "52:54:00:0f:2d:bd", ip: "192.168.39.163"} in network mk-ha-269108
	I0719 18:30:55.258637   33150 main.go:141] libmachine: (ha-269108) DBG | Getting to WaitForSSH function...
	I0719 18:30:55.258667   33150 main.go:141] libmachine: (ha-269108) Reserved static IP address: 192.168.39.163
	I0719 18:30:55.258682   33150 main.go:141] libmachine: (ha-269108) Waiting for SSH to be available...
	I0719 18:30:55.261529   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.261925   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.261946   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.262112   33150 main.go:141] libmachine: (ha-269108) DBG | Using SSH client type: external
	I0719 18:30:55.262137   33150 main.go:141] libmachine: (ha-269108) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa (-rw-------)
	I0719 18:30:55.262173   33150 main.go:141] libmachine: (ha-269108) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:30:55.262187   33150 main.go:141] libmachine: (ha-269108) DBG | About to run SSH command:
	I0719 18:30:55.262199   33150 main.go:141] libmachine: (ha-269108) DBG | exit 0
	I0719 18:30:55.383685   33150 main.go:141] libmachine: (ha-269108) DBG | SSH cmd err, output: <nil>: 
	I0719 18:30:55.383964   33150 main.go:141] libmachine: (ha-269108) KVM machine creation complete!
	I0719 18:30:55.384297   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:55.384870   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:55.385087   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:55.385297   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:30:55.385314   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:30:55.386580   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:30:55.386604   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:30:55.386610   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:30:55.386616   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.389184   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.389721   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.389744   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.389925   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.390106   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.390255   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.390423   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.390598   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.390794   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.390806   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:30:55.486923   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:30:55.486947   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:30:55.486953   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.489905   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.490298   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.490322   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.490498   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.490686   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.490869   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.491030   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.491188   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.491380   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.491395   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:30:55.588352   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:30:55.588454   33150 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:30:55.588469   33150 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:30:55.588480   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.588752   33150 buildroot.go:166] provisioning hostname "ha-269108"
	I0719 18:30:55.588774   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.588974   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.591207   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.591532   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.591565   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.591786   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.591974   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.592166   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.592434   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.592710   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.592927   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.592943   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108 && echo "ha-269108" | sudo tee /etc/hostname
	I0719 18:30:55.705281   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:30:55.705308   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.708401   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.708809   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.708849   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.709025   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.709223   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.709371   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.709613   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.709825   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:55.709984   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:55.710000   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:30:55.817740   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:30:55.817771   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:30:55.817821   33150 buildroot.go:174] setting up certificates
	I0719 18:30:55.817834   33150 provision.go:84] configureAuth start
	I0719 18:30:55.817852   33150 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:30:55.818142   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:55.821007   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.821339   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.821359   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.821512   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.823749   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.824121   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.824140   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.824291   33150 provision.go:143] copyHostCerts
	I0719 18:30:55.824341   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:30:55.824375   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:30:55.824391   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:30:55.824460   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:30:55.824547   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:30:55.824571   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:30:55.824588   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:30:55.824630   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:30:55.824687   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:30:55.824711   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:30:55.824720   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:30:55.824749   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:30:55.824810   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108 san=[127.0.0.1 192.168.39.163 ha-269108 localhost minikube]
	I0719 18:30:55.988123   33150 provision.go:177] copyRemoteCerts
	I0719 18:30:55.988178   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:30:55.988209   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:55.990973   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.991349   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:55.991371   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:55.991622   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:55.991879   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:55.992043   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:55.992237   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.068799   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:30:56.068859   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:30:56.090604   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:30:56.090660   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 18:30:56.112129   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:30:56.112203   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:30:56.133910   33150 provision.go:87] duration metric: took 316.058428ms to configureAuth
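	The server certificate generated in the configureAuth step above (org=jenkins.ha-269108, san=[127.0.0.1 192.168.39.163 ha-269108 localhost minikube]) can be sanity-checked from the host with plain openssl. A minimal sketch, assuming openssl is installed on the Jenkins host and using the ServerCertPath shown earlier in this log:

		openssl x509 -in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'

	The printed SAN entries should correspond to the san= list logged by provision.go:117 above.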
	I0719 18:30:56.133936   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:30:56.134132   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:30:56.134211   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.137205   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.137537   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.137557   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.137739   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.137968   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.138147   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.138292   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.138519   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:56.138701   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:56.138718   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:30:56.385670   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:30:56.385697   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:30:56.385720   33150 main.go:141] libmachine: (ha-269108) Calling .GetURL
	I0719 18:30:56.387044   33150 main.go:141] libmachine: (ha-269108) DBG | Using libvirt version 6000000
	I0719 18:30:56.389406   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.389747   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.389775   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.389938   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:30:56.389953   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:30:56.389959   33150 client.go:171] duration metric: took 24.88550781s to LocalClient.Create
	I0719 18:30:56.389981   33150 start.go:167] duration metric: took 24.885561275s to libmachine.API.Create "ha-269108"
	I0719 18:30:56.389990   33150 start.go:293] postStartSetup for "ha-269108" (driver="kvm2")
	I0719 18:30:56.389999   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:30:56.390029   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.390383   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:30:56.390408   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.392414   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.392758   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.392777   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.392911   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.393126   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.393312   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.393470   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.469332   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:30:56.473167   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:30:56.473193   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:30:56.473248   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:30:56.473313   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:30:56.473323   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:30:56.473415   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:30:56.481923   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:30:56.503527   33150 start.go:296] duration metric: took 113.525152ms for postStartSetup
	I0719 18:30:56.503578   33150 main.go:141] libmachine: (ha-269108) Calling .GetConfigRaw
	I0719 18:30:56.504146   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:56.506944   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.507327   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.507353   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.507706   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:30:56.507900   33150 start.go:128] duration metric: took 25.020900922s to createHost
	I0719 18:30:56.507923   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.510126   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.510459   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.510482   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.510608   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.510801   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.510937   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.511051   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.511199   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:30:56.511360   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:30:56.511376   33150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:30:56.608287   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413856.588592402
	
	I0719 18:30:56.608330   33150 fix.go:216] guest clock: 1721413856.588592402
	I0719 18:30:56.608341   33150 fix.go:229] Guest: 2024-07-19 18:30:56.588592402 +0000 UTC Remote: 2024-07-19 18:30:56.5079122 +0000 UTC m=+25.120917110 (delta=80.680202ms)
	I0719 18:30:56.608371   33150 fix.go:200] guest clock delta is within tolerance: 80.680202ms
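	The 80.680202ms delta reported here is simply the guest timestamp minus the host ("Remote") timestamp from the preceding line: 1721413856.588592402 - 1721413856.507912200 = 0.080680202 s, which is inside the clock-skew tolerance minikube applies before releasing the machine lock.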
	I0719 18:30:56.608378   33150 start.go:83] releasing machines lock for "ha-269108", held for 25.121454089s
	I0719 18:30:56.608399   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.608683   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:56.611161   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.611464   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.611486   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.611682   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612173   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612349   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:30:56.612447   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:30:56.612496   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.612557   33150 ssh_runner.go:195] Run: cat /version.json
	I0719 18:30:56.612581   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:30:56.615000   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615329   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.615360   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615486   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.615519   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.615783   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.615949   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.615975   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:56.616002   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:56.616122   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.616159   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:30:56.616329   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:30:56.616498   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:30:56.616649   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:30:56.689030   33150 ssh_runner.go:195] Run: systemctl --version
	I0719 18:30:56.724548   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:30:56.880613   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:30:56.886666   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:30:56.886742   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:30:56.901684   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:30:56.901707   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:30:56.901774   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:30:56.917945   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:30:56.932470   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:30:56.932530   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:30:56.946517   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:30:56.959847   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:30:57.070914   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:30:57.211165   33150 docker.go:233] disabling docker service ...
	I0719 18:30:57.211241   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:30:57.225219   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:30:57.238602   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:30:57.372703   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:30:57.479211   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:30:57.494650   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:30:57.512970   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:30:57.513034   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.523939   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:30:57.524027   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.534681   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.545541   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.556278   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:30:57.567481   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.578179   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.595082   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:30:57.606021   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:30:57.615980   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:30:57.616025   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:30:57.629267   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:30:57.639288   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:30:57.755011   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
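	The sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A quick verification of the resulting values on the guest, sketched here rather than taken from this run:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# expected, per the sed commands above:
		#   pause_image = "registry.k8s.io/pause:3.9"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",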
	I0719 18:30:57.882249   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:30:57.882342   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:30:57.886772   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:30:57.886817   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:30:57.890076   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:30:57.927197   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:30:57.927274   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:30:57.953210   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:30:57.980690   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:30:57.982008   33150 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:30:57.984791   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:57.985146   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:30:57.985191   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:30:57.985319   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:30:57.989005   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:30:58.001238   33150 kubeadm.go:883] updating cluster {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:30:58.001464   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:30:58.001552   33150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:30:58.030865   33150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 18:30:58.030940   33150 ssh_runner.go:195] Run: which lz4
	I0719 18:30:58.034438   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 18:30:58.034519   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 18:30:58.038297   33150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 18:30:58.038341   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 18:30:59.268077   33150 crio.go:462] duration metric: took 1.233578451s to copy over tarball
	I0719 18:30:59.268156   33150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 18:31:01.397481   33150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.129301181s)
	I0719 18:31:01.397510   33150 crio.go:469] duration metric: took 2.129404065s to extract the tarball
	I0719 18:31:01.397519   33150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 18:31:01.434652   33150 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:31:01.475962   33150 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:31:01.475983   33150 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:31:01.475991   33150 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 18:31:01.476089   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:31:01.476158   33150 ssh_runner.go:195] Run: crio config
	I0719 18:31:01.518892   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:31:01.518909   33150 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 18:31:01.518918   33150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:31:01.518937   33150 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-269108 NodeName:ha-269108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:31:01.519109   33150 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-269108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 18:31:01.519137   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:31:01.519182   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:31:01.533875   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:31:01.533979   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
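	This generated static pod runs kube-vip on the control-plane node: per the environment above it advertises the HA virtual IP 192.168.39.254 on eth0 via ARP (vip_arp), elects a leader through the plndr-cp-lock Lease in kube-system (vip_leaderelection), and load-balances the API server on port 8443 (cp_enable/lb_enable). Once the node is up, the VIP can be checked with ordinary tooling; an illustrative pair of commands, not taken from this log and assuming the VIP holder is reachable:

		ip addr show eth0 | grep 192.168.39.254
		curl -k https://192.168.39.254:8443/version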
	I0719 18:31:01.534035   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:01.543190   33150 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:31:01.543244   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 18:31:01.551877   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 18:31:01.567026   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:31:01.581462   33150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 18:31:01.595661   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 18:31:01.610032   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:31:01.613479   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:01.624196   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:01.751552   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:31:01.768136   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.163
	I0719 18:31:01.768159   33150 certs.go:194] generating shared ca certs ...
	I0719 18:31:01.768179   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.768345   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:31:01.768417   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:31:01.768429   33150 certs.go:256] generating profile certs ...
	I0719 18:31:01.768494   33150 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:31:01.768512   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt with IP's: []
	I0719 18:31:01.850221   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt ...
	I0719 18:31:01.850245   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt: {Name:mk1bbdb3d9eee875d6d8d38aa2a4774f7a83ec0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.850433   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key ...
	I0719 18:31:01.850448   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key: {Name:mk6c8838f2c3df45dc84b178c31aa71d51583a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:01.850549   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d
	I0719 18:31:01.850564   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.254]
	I0719 18:31:02.131129   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d ...
	I0719 18:31:02.131159   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d: {Name:mka5712a99242119887a39a3dc4313da01e0ff4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.131327   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d ...
	I0719 18:31:02.131343   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d: {Name:mk6a417aa50ec31953502174b7f9c41a2557c40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.131450   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.ab4bef6d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:31:02.131537   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.ab4bef6d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:31:02.131594   33150 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:31:02.131607   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt with IP's: []
	I0719 18:31:02.188179   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt ...
	I0719 18:31:02.188208   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt: {Name:mk8634b7706115d209192d55859676f11a271bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.188383   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key ...
	I0719 18:31:02.188398   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key: {Name:mk3396d6a2dd4b0e3c48be746c4b2c3be317c855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:02.188486   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:31:02.188509   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:31:02.188524   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:31:02.188540   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:31:02.188554   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:31:02.188571   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:31:02.188588   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:31:02.188600   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:31:02.188649   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:31:02.188682   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:31:02.188691   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:31:02.188711   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:31:02.188732   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:31:02.188750   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:31:02.188789   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:02.188830   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.188845   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.188857   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.189353   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:31:02.214973   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:31:02.240024   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:31:02.263329   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:31:02.285700   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 18:31:02.308070   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:31:02.330239   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:31:02.352293   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:31:02.374566   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:31:02.397173   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:31:02.421367   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:31:02.443990   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:31:02.459603   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:31:02.464978   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:31:02.475394   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.479504   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.479560   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:31:02.485097   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:31:02.496863   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:31:02.508107   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.512260   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.512311   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:02.517822   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:31:02.528933   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:31:02.540512   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.544670   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.544711   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:31:02.549980   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
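
The openssl/ln -fs sequence above installs each CA bundle under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so TLS clients on the node can find it in /etc/ssl/certs. A minimal sketch of that step, assuming openssl is on PATH; linkBySubjectHash and the paths are illustrative, and minikube runs the equivalent shell over SSH rather than calling this directly:

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash" + "ln -fs" pair above:
// it computes the subject hash of a PEM bundle and links it as <hash>.0.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present, matching the "test -L || ln -fs" guard in the log
	}
	return os.Symlink(pemPath, link)
}

A call such as linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs") would reproduce the b5213941.0 symlink seen above.
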
	I0719 18:31:02.565760   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:31:02.570861   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:31:02.570913   33150 kubeadm.go:392] StartCluster: {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:31:02.570992   33150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:31:02.571040   33150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:31:02.636018   33150 cri.go:89] found id: ""
	I0719 18:31:02.636085   33150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 18:31:02.646956   33150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 18:31:02.655969   33150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 18:31:02.665048   33150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 18:31:02.665062   33150 kubeadm.go:157] found existing configuration files:
	
	I0719 18:31:02.665105   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 18:31:02.673592   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 18:31:02.673665   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 18:31:02.682219   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 18:31:02.690457   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 18:31:02.690501   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 18:31:02.699219   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 18:31:02.707799   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 18:31:02.707864   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 18:31:02.716764   33150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 18:31:02.725066   33150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 18:31:02.725118   33150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
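
The grep/rm pairs above implement the stale-config check: a leftover /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm init can regenerate it. On this first start the files simply do not exist, which is why every grep exits with status 2. A hedged sketch of that check (dropStaleKubeconfig is an illustrative name, not minikube's kubeadm.go):

package sketch

import (
	"bytes"
	"os"
)

// dropStaleKubeconfig keeps path only if it already references the expected
// control-plane endpoint; otherwise it removes the file so kubeadm can
// regenerate it.
func dropStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // first start: nothing to clean up
		}
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // still points at the right endpoint, keep it
	}
	return os.Remove(path)
}

Here endpoint would be "https://control-plane.minikube.internal:8443", the string grep'd for in the log.
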
	I0719 18:31:02.733678   33150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 18:31:02.959424   33150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 18:31:13.703075   33150 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 18:31:13.703157   33150 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 18:31:13.703231   33150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 18:31:13.703346   33150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 18:31:13.703453   33150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 18:31:13.703541   33150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 18:31:13.705067   33150 out.go:204]   - Generating certificates and keys ...
	I0719 18:31:13.705177   33150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 18:31:13.705258   33150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 18:31:13.705342   33150 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 18:31:13.705440   33150 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 18:31:13.705511   33150 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 18:31:13.705555   33150 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 18:31:13.705644   33150 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 18:31:13.705780   33150 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-269108 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I0719 18:31:13.705858   33150 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 18:31:13.706012   33150 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-269108 localhost] and IPs [192.168.39.163 127.0.0.1 ::1]
	I0719 18:31:13.706085   33150 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 18:31:13.706150   33150 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 18:31:13.706193   33150 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 18:31:13.706240   33150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 18:31:13.706312   33150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 18:31:13.706414   33150 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 18:31:13.706475   33150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 18:31:13.706536   33150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 18:31:13.706580   33150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 18:31:13.706656   33150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 18:31:13.706719   33150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 18:31:13.708161   33150 out.go:204]   - Booting up control plane ...
	I0719 18:31:13.708234   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 18:31:13.708297   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 18:31:13.708352   33150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 18:31:13.708442   33150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 18:31:13.708545   33150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 18:31:13.708612   33150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 18:31:13.708752   33150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 18:31:13.708816   33150 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 18:31:13.708889   33150 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.465729ms
	I0719 18:31:13.708954   33150 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 18:31:13.709006   33150 kubeadm.go:310] [api-check] The API server is healthy after 6.042829217s
	I0719 18:31:13.709095   33150 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 18:31:13.709205   33150 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 18:31:13.709261   33150 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 18:31:13.709413   33150 kubeadm.go:310] [mark-control-plane] Marking the node ha-269108 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 18:31:13.709483   33150 kubeadm.go:310] [bootstrap-token] Using token: l73je0.11l4jtjctlpyqoil
	I0719 18:31:13.710722   33150 out.go:204]   - Configuring RBAC rules ...
	I0719 18:31:13.710826   33150 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 18:31:13.710909   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 18:31:13.711062   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 18:31:13.711206   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 18:31:13.711338   33150 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 18:31:13.711417   33150 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 18:31:13.711519   33150 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 18:31:13.711562   33150 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 18:31:13.711601   33150 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 18:31:13.711607   33150 kubeadm.go:310] 
	I0719 18:31:13.711676   33150 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 18:31:13.711686   33150 kubeadm.go:310] 
	I0719 18:31:13.711790   33150 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 18:31:13.711798   33150 kubeadm.go:310] 
	I0719 18:31:13.711840   33150 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 18:31:13.711899   33150 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 18:31:13.711947   33150 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 18:31:13.711953   33150 kubeadm.go:310] 
	I0719 18:31:13.712000   33150 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 18:31:13.712005   33150 kubeadm.go:310] 
	I0719 18:31:13.712043   33150 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 18:31:13.712049   33150 kubeadm.go:310] 
	I0719 18:31:13.712098   33150 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 18:31:13.712161   33150 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 18:31:13.712218   33150 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 18:31:13.712223   33150 kubeadm.go:310] 
	I0719 18:31:13.712294   33150 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 18:31:13.712361   33150 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 18:31:13.712366   33150 kubeadm.go:310] 
	I0719 18:31:13.712441   33150 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l73je0.11l4jtjctlpyqoil \
	I0719 18:31:13.712550   33150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 18:31:13.712582   33150 kubeadm.go:310] 	--control-plane 
	I0719 18:31:13.712591   33150 kubeadm.go:310] 
	I0719 18:31:13.712694   33150 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 18:31:13.712709   33150 kubeadm.go:310] 
	I0719 18:31:13.712830   33150 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l73je0.11l4jtjctlpyqoil \
	I0719 18:31:13.712976   33150 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 18:31:13.712990   33150 cni.go:84] Creating CNI manager for ""
	I0719 18:31:13.712996   33150 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 18:31:13.714494   33150 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 18:31:13.715613   33150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 18:31:13.720394   33150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 18:31:13.720411   33150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 18:31:13.738689   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 18:31:14.082910   33150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 18:31:14.083025   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:14.083061   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108 minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=true
	I0719 18:31:14.104467   33150 ops.go:34] apiserver oom_adj: -16
	I0719 18:31:14.204309   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:14.705035   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:15.204661   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:15.704547   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:16.205339   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:16.704865   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:17.204998   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:17.705161   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:18.204488   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:18.705313   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:19.205391   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:19.705056   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:20.204936   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:20.704455   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:21.204347   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:21.705091   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:22.204518   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:22.705204   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:23.205078   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:23.704886   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:24.204991   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:24.704860   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:25.204471   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:25.705201   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.205340   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.705092   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 18:31:26.800735   33150 kubeadm.go:1113] duration metric: took 12.717785097s to wait for elevateKubeSystemPrivileges
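
The repeated "kubectl get sa default" runs above are a polling loop: minikube retries roughly every 500ms until the default ServiceAccount exists, which is the signal that the cluster-admin binding for kube-system:default (created two lines earlier) can take effect; the whole wait took about 12.7s here. A minimal sketch of such a wait loop, assuming the kubectl binary and kubeconfig paths are passed in; waitForDefaultSA is an illustrative name, not minikube's elevateKubeSystemPrivileges:

package sketch

import (
	"context"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds, meaning
// the default ServiceAccount exists, or until ctx is cancelled.
func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, kubectl, "get", "sa", "default",
			"--kubeconfig", kubeconfig).Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
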
	I0719 18:31:26.800770   33150 kubeadm.go:394] duration metric: took 24.229860376s to StartCluster
	I0719 18:31:26.800791   33150 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:26.800884   33150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:31:26.801658   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:26.801887   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 18:31:26.801905   33150 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 18:31:26.801949   33150 addons.go:69] Setting storage-provisioner=true in profile "ha-269108"
	I0719 18:31:26.801979   33150 addons.go:234] Setting addon storage-provisioner=true in "ha-269108"
	I0719 18:31:26.801883   33150 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:26.802001   33150 addons.go:69] Setting default-storageclass=true in profile "ha-269108"
	I0719 18:31:26.802046   33150 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-269108"
	I0719 18:31:26.802092   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:26.802004   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:31:26.802006   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:26.802460   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.802482   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.802497   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.802521   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.817752   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0719 18:31:26.817982   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
	I0719 18:31:26.818243   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.818472   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.818784   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.818800   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.818955   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.818968   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.819194   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.819282   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.819440   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.819932   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.819977   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.821832   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:31:26.822183   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 18:31:26.822766   33150 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 18:31:26.822978   33150 addons.go:234] Setting addon default-storageclass=true in "ha-269108"
	I0719 18:31:26.823018   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:26.823350   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.823395   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.835506   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0719 18:31:26.835953   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.836451   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.836471   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.836768   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.836964   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.837934   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0719 18:31:26.838378   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.838884   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.838901   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.838916   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:26.839172   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.839725   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:26.839769   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:26.841333   33150 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 18:31:26.843028   33150 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:31:26.843046   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 18:31:26.843063   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:26.846242   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.846701   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:26.846734   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.846981   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:26.847191   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:26.847333   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:26.847495   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:26.854341   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0719 18:31:26.854744   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:26.855200   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:26.855223   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:26.855570   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:26.855765   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:26.857239   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:26.857436   33150 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 18:31:26.857450   33150 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 18:31:26.857464   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:26.860456   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.860893   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:26.860913   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:26.861143   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:26.861307   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:26.861466   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:26.861588   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
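
The sshutil lines above build SSH clients for the node (192.168.39.163:22, user docker, the machine's id_rsa) so the addon manifests can be copied over and applied. A sketch of an equivalent client using golang.org/x/crypto/ssh; this is an assumption about one way to do it, not a claim about minikube's sshutil internals, and newNodeSSHClient is an illustrative name:

package sketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newNodeSSHClient builds an SSH client for a node from its private key file.
func newNodeSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}
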
	I0719 18:31:26.891129   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 18:31:26.999097   33150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 18:31:27.034517   33150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 18:31:27.364114   33150 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
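
The Run line a few entries above pipes the coredns ConfigMap through sed to insert a hosts {} stanza (and a log directive) ahead of the forward plugin, then replaces the ConfigMap; the line just above confirms the host.minikube.internal record landed. A sketch of the hosts-stanza part of that rewrite done as plain string surgery in Go; injectHostRecord is an illustrative name and the log directive insertion is omitted:

package sketch

import "strings"

// injectHostRecord inserts a hosts {} stanza for host.minikube.internal just
// before the "forward . /etc/resolv.conf" line of a Corefile, mirroring the
// sed expression in the Run line above.
func injectHostRecord(corefile, hostIP string) string {
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

With hostIP set to "192.168.39.1", this yields the same ConfigMap content the log reports injecting.
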
	I0719 18:31:27.446502   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.446529   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.446800   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.446816   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.446824   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.446832   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.446841   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.447062   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.447079   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.447098   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.447186   33150 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 18:31:27.447200   33150 round_trippers.go:469] Request Headers:
	I0719 18:31:27.447209   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:31:27.447218   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:31:27.455421   33150 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 18:31:27.456174   33150 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 18:31:27.456193   33150 round_trippers.go:469] Request Headers:
	I0719 18:31:27.456204   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:31:27.456214   33150 round_trippers.go:473]     Content-Type: application/json
	I0719 18:31:27.456221   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:31:27.458593   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:31:27.458757   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.458774   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.459122   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.459202   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.459235   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.630487   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.630514   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.630831   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.630849   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.630859   33150 main.go:141] libmachine: Making call to close driver server
	I0719 18:31:27.630878   33150 main.go:141] libmachine: (ha-269108) Calling .Close
	I0719 18:31:27.631146   33150 main.go:141] libmachine: (ha-269108) DBG | Closing plugin on server side
	I0719 18:31:27.631207   33150 main.go:141] libmachine: Successfully made call to close driver server
	I0719 18:31:27.631223   33150 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 18:31:27.632974   33150 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 18:31:27.634463   33150 addons.go:510] duration metric: took 832.560429ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0719 18:31:27.634495   33150 start.go:246] waiting for cluster config update ...
	I0719 18:31:27.634509   33150 start.go:255] writing updated cluster config ...
	I0719 18:31:27.636309   33150 out.go:177] 
	I0719 18:31:27.637962   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:27.638027   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:27.639532   33150 out.go:177] * Starting "ha-269108-m02" control-plane node in "ha-269108" cluster
	I0719 18:31:27.640602   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:31:27.640623   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:31:27.640714   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:31:27.640725   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:31:27.640803   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:27.640944   33150 start.go:360] acquireMachinesLock for ha-269108-m02: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:31:27.640981   33150 start.go:364] duration metric: took 20.161µs to acquireMachinesLock for "ha-269108-m02"
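
The acquireMachinesLock line above reflects the named-lock pattern visible throughout the log as {Name:... Delay:500ms Timeout:...} specs: take an exclusive lock, retrying at the given delay until the timeout expires (here it succeeded immediately, in about 20µs). A rough file-based illustration of that pattern only; acquireLock is an illustrative name, not minikube's lock helper:

package sketch

import (
	"fmt"
	"os"
	"time"
)

// acquireLock takes an exclusive lock file, retrying every delay until the
// timeout elapses, echoing the {Delay:500ms Timeout:...} specs in the log.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}
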
	I0719 18:31:27.641001   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:27.641080   33150 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 18:31:27.642702   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:31:27.642799   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:27.642831   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:27.658005   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0719 18:31:27.658402   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:27.658896   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:27.658923   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:27.659304   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:27.659463   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:27.659641   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:27.659799   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:31:27.659827   33150 client.go:168] LocalClient.Create starting
	I0719 18:31:27.659868   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:31:27.659909   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:31:27.659925   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:31:27.659989   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:31:27.660010   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:31:27.660027   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:31:27.660063   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:31:27.660074   33150 main.go:141] libmachine: (ha-269108-m02) Calling .PreCreateCheck
	I0719 18:31:27.660244   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:27.660673   33150 main.go:141] libmachine: Creating machine...
	I0719 18:31:27.660686   33150 main.go:141] libmachine: (ha-269108-m02) Calling .Create
	I0719 18:31:27.660848   33150 main.go:141] libmachine: (ha-269108-m02) Creating KVM machine...
	I0719 18:31:27.662015   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found existing default KVM network
	I0719 18:31:27.662208   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found existing private KVM network mk-ha-269108
	I0719 18:31:27.662377   33150 main.go:141] libmachine: (ha-269108-m02) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 ...
	I0719 18:31:27.662400   33150 main.go:141] libmachine: (ha-269108-m02) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:31:27.662488   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:27.662362   33536 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:31:27.662549   33150 main.go:141] libmachine: (ha-269108-m02) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:31:27.878032   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:27.877904   33536 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa...
	I0719 18:31:28.146109   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:28.146005   33536 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/ha-269108-m02.rawdisk...
	I0719 18:31:28.146137   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Writing magic tar header
	I0719 18:31:28.146147   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Writing SSH key tar header
	I0719 18:31:28.146156   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:28.146128   33536 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 ...
	I0719 18:31:28.146229   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02
	I0719 18:31:28.146254   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02 (perms=drwx------)
	I0719 18:31:28.146269   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:31:28.146284   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:31:28.146294   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:31:28.146301   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:31:28.146307   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:31:28.146316   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:31:28.146324   33150 main.go:141] libmachine: (ha-269108-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:31:28.146333   33150 main.go:141] libmachine: (ha-269108-m02) Creating domain...
	I0719 18:31:28.146352   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:31:28.146368   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:31:28.146396   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:31:28.146421   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Checking permissions on dir: /home
	I0719 18:31:28.146434   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Skipping /home - not owner
	I0719 18:31:28.147556   33150 main.go:141] libmachine: (ha-269108-m02) define libvirt domain using xml: 
	I0719 18:31:28.147571   33150 main.go:141] libmachine: (ha-269108-m02) <domain type='kvm'>
	I0719 18:31:28.147577   33150 main.go:141] libmachine: (ha-269108-m02)   <name>ha-269108-m02</name>
	I0719 18:31:28.147582   33150 main.go:141] libmachine: (ha-269108-m02)   <memory unit='MiB'>2200</memory>
	I0719 18:31:28.147587   33150 main.go:141] libmachine: (ha-269108-m02)   <vcpu>2</vcpu>
	I0719 18:31:28.147591   33150 main.go:141] libmachine: (ha-269108-m02)   <features>
	I0719 18:31:28.147597   33150 main.go:141] libmachine: (ha-269108-m02)     <acpi/>
	I0719 18:31:28.147601   33150 main.go:141] libmachine: (ha-269108-m02)     <apic/>
	I0719 18:31:28.147606   33150 main.go:141] libmachine: (ha-269108-m02)     <pae/>
	I0719 18:31:28.147623   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.147630   33150 main.go:141] libmachine: (ha-269108-m02)   </features>
	I0719 18:31:28.147640   33150 main.go:141] libmachine: (ha-269108-m02)   <cpu mode='host-passthrough'>
	I0719 18:31:28.147652   33150 main.go:141] libmachine: (ha-269108-m02)   
	I0719 18:31:28.147656   33150 main.go:141] libmachine: (ha-269108-m02)   </cpu>
	I0719 18:31:28.147661   33150 main.go:141] libmachine: (ha-269108-m02)   <os>
	I0719 18:31:28.147668   33150 main.go:141] libmachine: (ha-269108-m02)     <type>hvm</type>
	I0719 18:31:28.147674   33150 main.go:141] libmachine: (ha-269108-m02)     <boot dev='cdrom'/>
	I0719 18:31:28.147678   33150 main.go:141] libmachine: (ha-269108-m02)     <boot dev='hd'/>
	I0719 18:31:28.147686   33150 main.go:141] libmachine: (ha-269108-m02)     <bootmenu enable='no'/>
	I0719 18:31:28.147690   33150 main.go:141] libmachine: (ha-269108-m02)   </os>
	I0719 18:31:28.147696   33150 main.go:141] libmachine: (ha-269108-m02)   <devices>
	I0719 18:31:28.147706   33150 main.go:141] libmachine: (ha-269108-m02)     <disk type='file' device='cdrom'>
	I0719 18:31:28.147721   33150 main.go:141] libmachine: (ha-269108-m02)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/boot2docker.iso'/>
	I0719 18:31:28.147732   33150 main.go:141] libmachine: (ha-269108-m02)       <target dev='hdc' bus='scsi'/>
	I0719 18:31:28.147740   33150 main.go:141] libmachine: (ha-269108-m02)       <readonly/>
	I0719 18:31:28.147750   33150 main.go:141] libmachine: (ha-269108-m02)     </disk>
	I0719 18:31:28.147759   33150 main.go:141] libmachine: (ha-269108-m02)     <disk type='file' device='disk'>
	I0719 18:31:28.147767   33150 main.go:141] libmachine: (ha-269108-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:31:28.147775   33150 main.go:141] libmachine: (ha-269108-m02)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/ha-269108-m02.rawdisk'/>
	I0719 18:31:28.147781   33150 main.go:141] libmachine: (ha-269108-m02)       <target dev='hda' bus='virtio'/>
	I0719 18:31:28.147786   33150 main.go:141] libmachine: (ha-269108-m02)     </disk>
	I0719 18:31:28.147796   33150 main.go:141] libmachine: (ha-269108-m02)     <interface type='network'>
	I0719 18:31:28.147805   33150 main.go:141] libmachine: (ha-269108-m02)       <source network='mk-ha-269108'/>
	I0719 18:31:28.147817   33150 main.go:141] libmachine: (ha-269108-m02)       <model type='virtio'/>
	I0719 18:31:28.147827   33150 main.go:141] libmachine: (ha-269108-m02)     </interface>
	I0719 18:31:28.147851   33150 main.go:141] libmachine: (ha-269108-m02)     <interface type='network'>
	I0719 18:31:28.147864   33150 main.go:141] libmachine: (ha-269108-m02)       <source network='default'/>
	I0719 18:31:28.147878   33150 main.go:141] libmachine: (ha-269108-m02)       <model type='virtio'/>
	I0719 18:31:28.147911   33150 main.go:141] libmachine: (ha-269108-m02)     </interface>
	I0719 18:31:28.147950   33150 main.go:141] libmachine: (ha-269108-m02)     <serial type='pty'>
	I0719 18:31:28.147965   33150 main.go:141] libmachine: (ha-269108-m02)       <target port='0'/>
	I0719 18:31:28.147976   33150 main.go:141] libmachine: (ha-269108-m02)     </serial>
	I0719 18:31:28.147990   33150 main.go:141] libmachine: (ha-269108-m02)     <console type='pty'>
	I0719 18:31:28.148002   33150 main.go:141] libmachine: (ha-269108-m02)       <target type='serial' port='0'/>
	I0719 18:31:28.148014   33150 main.go:141] libmachine: (ha-269108-m02)     </console>
	I0719 18:31:28.148029   33150 main.go:141] libmachine: (ha-269108-m02)     <rng model='virtio'>
	I0719 18:31:28.148041   33150 main.go:141] libmachine: (ha-269108-m02)       <backend model='random'>/dev/random</backend>
	I0719 18:31:28.148058   33150 main.go:141] libmachine: (ha-269108-m02)     </rng>
	I0719 18:31:28.148070   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.148079   33150 main.go:141] libmachine: (ha-269108-m02)     
	I0719 18:31:28.148090   33150 main.go:141] libmachine: (ha-269108-m02)   </devices>
	I0719 18:31:28.148112   33150 main.go:141] libmachine: (ha-269108-m02) </domain>
	I0719 18:31:28.148126   33150 main.go:141] libmachine: (ha-269108-m02) 
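
Editor's note: the <domain> XML above is assembled by the kvm2 driver and handed to libvirt ("define libvirt domain using xml"). As an illustration only, the same definition could be loaded by hand with virsh; the Go sketch below shells out to virsh, with an assumed domain.xml path standing in for the driver's in-memory XML and the domain name taken from this log.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // domain.xml is assumed to hold the <domain type='kvm'> document shown above.
        if out, err := exec.Command("virsh", "define", "domain.xml").CombinedOutput(); err != nil {
            log.Fatalf("virsh define failed: %v\n%s", err, out)
        }
        // Start the freshly defined domain by the name used in the log.
        if out, err := exec.Command("virsh", "start", "ha-269108-m02").CombinedOutput(); err != nil {
            log.Fatalf("virsh start failed: %v\n%s", err, out)
        }
    }
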
	I0719 18:31:28.154651   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:bb:2c:b7 in network default
	I0719 18:31:28.155229   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring networks are active...
	I0719 18:31:28.155264   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:28.155978   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring network default is active
	I0719 18:31:28.156294   33150 main.go:141] libmachine: (ha-269108-m02) Ensuring network mk-ha-269108 is active
	I0719 18:31:28.156620   33150 main.go:141] libmachine: (ha-269108-m02) Getting domain xml...
	I0719 18:31:28.157377   33150 main.go:141] libmachine: (ha-269108-m02) Creating domain...
	I0719 18:31:29.357082   33150 main.go:141] libmachine: (ha-269108-m02) Waiting to get IP...
	I0719 18:31:29.357995   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.358485   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.358537   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.358459   33536 retry.go:31] will retry after 204.478278ms: waiting for machine to come up
	I0719 18:31:29.564846   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.565160   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.565186   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.565131   33536 retry.go:31] will retry after 234.726239ms: waiting for machine to come up
	I0719 18:31:29.801500   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:29.801930   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:29.801961   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:29.801875   33536 retry.go:31] will retry after 322.662599ms: waiting for machine to come up
	I0719 18:31:30.126636   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:30.127043   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:30.127075   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:30.126979   33536 retry.go:31] will retry after 468.612553ms: waiting for machine to come up
	I0719 18:31:30.597628   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:30.597984   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:30.598014   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:30.597947   33536 retry.go:31] will retry after 487.953425ms: waiting for machine to come up
	I0719 18:31:31.087648   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:31.088143   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:31.088173   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:31.088068   33536 retry.go:31] will retry after 864.735779ms: waiting for machine to come up
	I0719 18:31:31.953916   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:31.954368   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:31.954398   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:31.954322   33536 retry.go:31] will retry after 960.278085ms: waiting for machine to come up
	I0719 18:31:32.915680   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:32.916153   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:32.916182   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:32.916106   33536 retry.go:31] will retry after 1.384009045s: waiting for machine to come up
	I0719 18:31:34.301188   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:34.301541   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:34.301564   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:34.301505   33536 retry.go:31] will retry after 1.396513064s: waiting for machine to come up
	I0719 18:31:35.700082   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:35.700506   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:35.700543   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:35.700424   33536 retry.go:31] will retry after 2.311417499s: waiting for machine to come up
	I0719 18:31:38.013930   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:38.014402   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:38.014443   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:38.014339   33536 retry.go:31] will retry after 2.87643737s: waiting for machine to come up
	I0719 18:31:40.894223   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:40.894675   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:40.894697   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:40.894639   33536 retry.go:31] will retry after 3.244338926s: waiting for machine to come up
	I0719 18:31:44.140898   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:44.141390   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find current IP address of domain ha-269108-m02 in network mk-ha-269108
	I0719 18:31:44.141418   33150 main.go:141] libmachine: (ha-269108-m02) DBG | I0719 18:31:44.141344   33536 retry.go:31] will retry after 3.922811888s: waiting for machine to come up
	I0719 18:31:48.065990   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.066396   33150 main.go:141] libmachine: (ha-269108-m02) Found IP for machine: 192.168.39.87
	I0719 18:31:48.066425   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
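
Editor's note: the "will retry after ..." lines above are a grow-the-delay poll for the machine's DHCP lease. A minimal, self-contained sketch of that pattern; the lookup function, timeout, and growth rule are stand-ins rather than the driver's actual code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it reports an address, sleeping a growing
    // delay between attempts, the same shape as the "will retry after ..." lines
    // above. lookup stands in for the driver's DHCP-lease query.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay += delay / 2 // grow toward the multi-second waits seen above
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.87", nil // value observed in the log
        }, time.Minute)
        fmt.Println(ip, err)
    }
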
	I0719 18:31:48.066434   33150 main.go:141] libmachine: (ha-269108-m02) Reserving static IP address...
	I0719 18:31:48.066744   33150 main.go:141] libmachine: (ha-269108-m02) DBG | unable to find host DHCP lease matching {name: "ha-269108-m02", mac: "52:54:00:6e:a7:f0", ip: "192.168.39.87"} in network mk-ha-269108
	I0719 18:31:48.137287   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Getting to WaitForSSH function...
	I0719 18:31:48.137350   33150 main.go:141] libmachine: (ha-269108-m02) Reserved static IP address: 192.168.39.87
	I0719 18:31:48.137394   33150 main.go:141] libmachine: (ha-269108-m02) Waiting for SSH to be available...
	I0719 18:31:48.139982   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.140432   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.140459   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.140484   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using SSH client type: external
	I0719 18:31:48.140523   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa (-rw-------)
	I0719 18:31:48.140573   33150 main.go:141] libmachine: (ha-269108-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:31:48.140595   33150 main.go:141] libmachine: (ha-269108-m02) DBG | About to run SSH command:
	I0719 18:31:48.140627   33150 main.go:141] libmachine: (ha-269108-m02) DBG | exit 0
	I0719 18:31:48.267664   33150 main.go:141] libmachine: (ha-269108-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 18:31:48.267999   33150 main.go:141] libmachine: (ha-269108-m02) KVM machine creation complete!
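
Editor's note: "Waiting for SSH to be available" is satisfied by repeatedly running a trivial `exit 0` through the external ssh client with the options shown above. A rough sketch of that probe using plain os/exec; the address and key path are the ones from this log, and the attempt count and sleep are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH keeps running `ssh ... exit 0` until it succeeds, mirroring the
    // external-client probe in the log above.
    func waitForSSH(addr, keyPath string) error {
        for i := 0; i < 60; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+addr, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s never became available", addr)
    }

    func main() {
        err := waitForSSH("192.168.39.87",
            "/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa")
        fmt.Println(err)
    }
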
	I0719 18:31:48.268268   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:48.268829   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:48.269049   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:48.269198   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:31:48.269213   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:31:48.270342   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:31:48.270358   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:31:48.270366   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:31:48.270374   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.272503   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.272832   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.272868   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.273041   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.273208   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.273343   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.273482   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.273668   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.273928   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.273943   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:31:48.382965   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:31:48.382985   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:31:48.382992   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.385791   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.386183   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.386226   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.386322   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.386486   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.386626   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.386788   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.386932   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.387086   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.387096   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:31:48.500066   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:31:48.500125   33150 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:31:48.500131   33150 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:31:48.500139   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.500359   33150 buildroot.go:166] provisioning hostname "ha-269108-m02"
	I0719 18:31:48.500374   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.500551   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.503114   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.503465   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.503491   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.503637   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.503797   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.503935   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.504110   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.504301   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.504457   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.504471   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108-m02 && echo "ha-269108-m02" | sudo tee /etc/hostname
	I0719 18:31:48.629059   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108-m02
	
	I0719 18:31:48.629092   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.631631   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.631952   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.631983   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.632142   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.632361   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.632621   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.632767   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.632932   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:48.633111   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:48.633127   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:31:48.752791   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
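
Editor's note: the hostname step above is an idempotent edit of /etc/hosts (add or rewrite the 127.0.1.1 line only when the name is not already present). The sketch below performs the same check in Go against an arbitrary hosts-style file; the path, and the decision to append rather than rewrite, are illustrative and not the provisioner's code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry adds "127.0.1.1 <name>" to a hosts-style file only if no
    // existing line already ends in the hostname, mirroring the grep/sed snippet
    // in the log above.
    func ensureHostsEntry(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimSpace(line), " "+name) {
                return nil // already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
        return err
    }

    func main() {
        // Run against a scratch file rather than the real /etc/hosts.
        fmt.Println(ensureHostsEntry("hosts.test", "ha-269108-m02"))
    }
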
	I0719 18:31:48.752816   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:31:48.752835   33150 buildroot.go:174] setting up certificates
	I0719 18:31:48.752847   33150 provision.go:84] configureAuth start
	I0719 18:31:48.752859   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetMachineName
	I0719 18:31:48.753109   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:48.755705   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.756161   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.756190   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.756359   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.758433   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.758833   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.758857   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.758987   33150 provision.go:143] copyHostCerts
	I0719 18:31:48.759019   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:31:48.759046   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:31:48.759054   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:31:48.759114   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:31:48.759197   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:31:48.759213   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:31:48.759220   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:31:48.759244   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:31:48.759297   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:31:48.759313   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:31:48.759319   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:31:48.759339   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:31:48.759401   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108-m02 san=[127.0.0.1 192.168.39.87 ha-269108-m02 localhost minikube]
	I0719 18:31:48.853483   33150 provision.go:177] copyRemoteCerts
	I0719 18:31:48.853535   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:31:48.853556   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:48.856270   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.856612   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:48.856635   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:48.856853   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:48.857021   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:48.857159   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:48.857312   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:48.941122   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:31:48.941189   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:31:48.964885   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:31:48.964951   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:31:48.986222   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:31:48.986286   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 18:31:49.007502   33150 provision.go:87] duration metric: took 254.642471ms to configureAuth
	I0719 18:31:49.007530   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:31:49.007696   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:49.007761   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.010388   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.010739   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.010760   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.010947   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.011104   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.011265   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.011369   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.011542   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:49.011699   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:49.011714   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:31:49.270448   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
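
Editor's note: the "%!s(MISSING)" in the logged command is most likely the logger re-rendering a literal %s that belongs to the remote printf, not part of the command itself. What the step appears to do is write a one-line CRI-O option file and restart the service; the sketch below is a hypothetical reconstruction of that intent, not the exact command.

    package main

    import "fmt"

    func main() {
        // Assumed intent of the provisioning command shown above.
        content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        fmt.Print("write to /etc/sysconfig/crio.minikube:\n" + content)
        fmt.Println("then run: sudo systemctl restart crio")
    }
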
	
	I0719 18:31:49.270472   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:31:49.270493   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetURL
	I0719 18:31:49.271746   33150 main.go:141] libmachine: (ha-269108-m02) DBG | Using libvirt version 6000000
	I0719 18:31:49.274029   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.274405   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.274422   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.274590   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:31:49.274608   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:31:49.274614   33150 client.go:171] duration metric: took 21.614780936s to LocalClient.Create
	I0719 18:31:49.274635   33150 start.go:167] duration metric: took 21.614836287s to libmachine.API.Create "ha-269108"
	I0719 18:31:49.274647   33150 start.go:293] postStartSetup for "ha-269108-m02" (driver="kvm2")
	I0719 18:31:49.274659   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:31:49.274687   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.274940   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:31:49.274964   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.277116   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.277417   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.277444   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.277595   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.277789   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.277941   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.278077   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.361625   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:31:49.365306   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:31:49.365354   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:31:49.365424   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:31:49.365494   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:31:49.365502   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:31:49.365602   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:31:49.374029   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:49.395428   33150 start.go:296] duration metric: took 120.768619ms for postStartSetup
	I0719 18:31:49.395482   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetConfigRaw
	I0719 18:31:49.396082   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:49.398511   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.398799   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.398829   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.399014   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:31:49.399184   33150 start.go:128] duration metric: took 21.758093311s to createHost
	I0719 18:31:49.399206   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.401491   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.401846   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.401874   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.402007   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.402221   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.402403   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.402557   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.402723   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:31:49.402873   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 18:31:49.402882   33150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:31:49.512177   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413909.485387400
	
	I0719 18:31:49.512200   33150 fix.go:216] guest clock: 1721413909.485387400
	I0719 18:31:49.512209   33150 fix.go:229] Guest: 2024-07-19 18:31:49.4853874 +0000 UTC Remote: 2024-07-19 18:31:49.399195075 +0000 UTC m=+78.012199986 (delta=86.192325ms)
	I0719 18:31:49.512228   33150 fix.go:200] guest clock delta is within tolerance: 86.192325ms
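
Editor's note: the fix.go lines compare the guest clock (read via what is presumably `date +%s.%N`, mangled in the log as `date +%!s(MISSING).%!N(MISSING)`) against the host clock and accept the machine when the delta is small. A toy reproduction with the two timestamps from this log; the one-second tolerance is illustrative, not minikube's actual threshold.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock as reported over SSH, host clock as recorded locally.
        guest := time.Unix(1721413909, 485387400)
        host := time.Date(2024, 7, 19, 18, 31, 49, 399195075, time.UTC)
        delta := guest.Sub(host) // ~86ms in the log above
        const tolerance = time.Second
        fmt.Printf("delta=%v within tolerance=%v: %t\n",
            delta, tolerance, delta < tolerance && delta > -tolerance)
    }
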
	I0719 18:31:49.512235   33150 start.go:83] releasing machines lock for "ha-269108-m02", held for 21.871243011s
	I0719 18:31:49.512256   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.512511   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:49.514775   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.515074   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.515098   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.517294   33150 out.go:177] * Found network options:
	I0719 18:31:49.518577   33150 out.go:177]   - NO_PROXY=192.168.39.163
	W0719 18:31:49.519614   33150 proxy.go:119] fail to check proxy env: Error ip not in block
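
Editor's note: the warning suggests the NO_PROXY value could not be interpreted as a block containing the new node's IP (a bare address such as 192.168.39.163 is not a CIDR). A tolerant check in the same spirit, purely illustrative of the idea rather than minikube's proxy.go logic.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // noProxyMatches accepts both bare IPs and CIDR blocks in a NO_PROXY-style list.
    func noProxyMatches(noProxy, ip string) bool {
        target := net.ParseIP(ip)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == ip {
                return true
            }
            if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(target) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(noProxyMatches("192.168.39.163", "192.168.39.87"))  // false: bare IP of the first node
        fmt.Println(noProxyMatches("192.168.39.0/24", "192.168.39.87")) // true: block covering the node
    }
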
	I0719 18:31:49.519651   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520179   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520388   33150 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:31:49.520483   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:31:49.520518   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	W0719 18:31:49.520578   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:31:49.520650   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:31:49.520674   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:31:49.523256   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523637   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.523667   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523687   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.523907   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.524120   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.524229   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:49.524265   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:49.524308   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.524388   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:31:49.524482   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.524563   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:31:49.524725   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:31:49.524857   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:31:49.755543   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:31:49.761508   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:31:49.761577   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:31:49.777307   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:31:49.777327   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:31:49.777390   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:31:49.794706   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:31:49.808829   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:31:49.808883   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:31:49.821430   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:31:49.833888   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:31:49.951895   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:31:50.105390   33150 docker.go:233] disabling docker service ...
	I0719 18:31:50.105449   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:31:50.118523   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:31:50.130348   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:31:50.241256   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:31:50.369648   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:31:50.382199   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:31:50.398418   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:31:50.398484   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.407703   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:31:50.407769   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.417097   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.426706   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.436105   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:31:50.445669   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.454852   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.469899   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:31:50.479120   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:31:50.487661   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:31:50.487708   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:31:50.499727   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:31:50.508159   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:50.615676   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
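
Editor's note: the CRI-O configuration above is a fixed sequence of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) followed by a daemon-reload and restart. The sketch below replays those exact commands locally through sh -c; the real provisioner sends them over SSH, and running this verbatim needs root and an existing config file.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Commands copied from the log above, applied in order.
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, c := range cmds {
            if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
                fmt.Printf("command failed: %s\n%s\n", c, out)
                return
            }
        }
    }
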
	I0719 18:31:50.746884   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:31:50.746963   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:31:50.751800   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:31:50.751867   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:31:50.755191   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:31:50.794044   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:31:50.794118   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:31:50.819862   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:31:50.846764   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:31:50.848070   33150 out.go:177]   - env NO_PROXY=192.168.39.163
	I0719 18:31:50.849217   33150 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:31:50.851695   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:50.852032   33150 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:31:41 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:31:50.852117   33150 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:31:50.852245   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:31:50.856088   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:50.867464   33150 mustload.go:65] Loading cluster: ha-269108
	I0719 18:31:50.867662   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:31:50.867935   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:50.867970   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:50.883089   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0719 18:31:50.883480   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:50.883925   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:50.883945   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:50.884249   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:50.884454   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:31:50.886105   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:50.886456   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:50.886496   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:50.900816   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
	I0719 18:31:50.901175   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:50.901542   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:50.901561   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:50.901797   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:50.901959   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:50.902091   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.87
	I0719 18:31:50.902100   33150 certs.go:194] generating shared ca certs ...
	I0719 18:31:50.902112   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.902219   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:31:50.902265   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:31:50.902275   33150 certs.go:256] generating profile certs ...
	I0719 18:31:50.902336   33150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:31:50.902370   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8
	I0719 18:31:50.902384   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.254]
	I0719 18:31:50.955381   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 ...
	I0719 18:31:50.955406   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8: {Name:mk1ed57d246dd4a25373bc959e961930df56fedc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.955579   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8 ...
	I0719 18:31:50.955595   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8: {Name:mka4a4e0c034fd089f31c9a0ff0a6ffd5c63ebcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:31:50.955687   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.28a107b8 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:31:50.955814   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.28a107b8 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
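
Editor's note: the apiserver profile cert above is generated with every relevant address in its subject alternative names (both node IPs, the 192.168.39.254 virtual IP, the 10.96.0.1 service IP, 10.0.0.1, and loopback) so it verifies from any of them. The sketch below is a self-signed stand-in that only demonstrates the SAN handling; the real cert is signed by the cluster CA, and the common name here is an assumption rather than a value from the log.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"}, // CN is illustrative
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs as listed in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.163"), net.ParseIP("192.168.39.87"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed here; the real code uses the cluster CA as the parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        cert, _ := x509.ParseCertificate(der)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }
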
	I0719 18:31:50.955974   33150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:31:50.955989   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:31:50.956002   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:31:50.956014   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:31:50.956026   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:31:50.956038   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:31:50.956050   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:31:50.956061   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:31:50.956073   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:31:50.956119   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:31:50.956144   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:31:50.956154   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:31:50.956176   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:31:50.956197   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:31:50.956217   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:31:50.956250   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:31:50.956276   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:31:50.956288   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:31:50.956299   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:50.956328   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:50.959084   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:50.959458   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:50.959483   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:50.959678   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:50.959877   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:50.960013   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:50.960155   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:51.028284   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 18:31:51.032685   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 18:31:51.042270   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 18:31:51.045880   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 18:31:51.055032   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 18:31:51.058663   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 18:31:51.067350   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 18:31:51.070879   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 18:31:51.080056   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 18:31:51.083776   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 18:31:51.092276   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 18:31:51.096014   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 18:31:51.104870   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:31:51.128000   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:31:51.149479   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:31:51.172303   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:31:51.194643   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 18:31:51.217771   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 18:31:51.241808   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:31:51.263063   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:31:51.285347   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:31:51.305989   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:31:51.326648   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:31:51.347173   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 18:31:51.361828   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 18:31:51.376656   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 18:31:51.392563   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 18:31:51.408924   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 18:31:51.425071   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 18:31:51.440006   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 18:31:51.455007   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:31:51.460909   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:31:51.471462   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.475571   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.475631   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:31:51.481146   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:31:51.490915   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:31:51.500570   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.504339   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.504391   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:31:51.509772   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 18:31:51.520293   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:31:51.530596   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.534867   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.534917   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:31:51.540186   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:31:51.550405   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:31:51.554118   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:31:51.554175   33150 kubeadm.go:934] updating node {m02 192.168.39.87 8443 v1.30.3 crio true true} ...
	I0719 18:31:51.554259   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:31:51.554283   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:31:51.554316   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:31:51.569669   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:31:51.569727   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
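The block above is the static pod manifest minikube generates for kube-vip; the log later copies it to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet itself runs kube-vip with host networking and advertises the control-plane VIP 192.168.39.254 on port 8443. As an illustrative aside only (not part of the test output), a small Go program along the following lines could read such a manifest and print the VIP it advertises; the gopkg.in/yaml.v3 dependency, the struct names, and the hard-coded manifest path are assumptions made for this sketch, not minikube code.

// Sketch: read a kube-vip static pod manifest and print its "address" env value.
// Assumes the manifest path shown later in the log and the gopkg.in/yaml.v3 package.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

type container struct {
	Name string   `yaml:"name"`
	Env  []envVar `yaml:"env"`
}

type manifest struct {
	Spec struct {
		Containers []container `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Walk the containers and report the advertised control-plane VIP.
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Printf("%s advertises control-plane VIP %s\n", c.Name, e.Value)
			}
		}
	}
}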
	I0719 18:31:51.569768   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:51.578609   33150 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 18:31:51.578660   33150 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 18:31:51.587117   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 18:31:51.587143   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:31:51.587198   33150 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 18:31:51.587213   33150 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0719 18:31:51.587220   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:31:51.591409   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 18:31:51.591430   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 18:31:52.358680   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:31:52.358762   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:31:52.364051   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 18:31:52.364086   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 18:31:52.771265   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:31:52.785576   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:31:52.785678   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:31:52.790198   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 18:31:52.790229   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 18:31:53.142196   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 18:31:53.150942   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0719 18:31:53.165881   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:31:53.181165   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:31:53.196194   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:31:53.199907   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:31:53.211185   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:31:53.333315   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:31:53.350812   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:31:53.351157   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:31:53.351194   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:31:53.365717   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0719 18:31:53.366179   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:31:53.366658   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:31:53.366681   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:31:53.366990   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:31:53.367182   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:31:53.367333   33150 start.go:317] joinCluster: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:31:53.367456   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 18:31:53.367484   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:31:53.370868   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:53.371295   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:31:53.371322   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:31:53.371484   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:31:53.371660   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:31:53.371847   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:31:53.371977   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:31:53.517186   33150 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:31:53.517234   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iipf47.o9mc099cey98md9b --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m02 --control-plane --apiserver-advertise-address=192.168.39.87 --apiserver-bind-port=8443"
	I0719 18:32:15.605699   33150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token iipf47.o9mc099cey98md9b --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m02 --control-plane --apiserver-advertise-address=192.168.39.87 --apiserver-bind-port=8443": (22.088445512s)
	I0719 18:32:15.605734   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 18:32:16.056967   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108-m02 minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=false
	I0719 18:32:16.185445   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-269108-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 18:32:16.300180   33150 start.go:319] duration metric: took 22.932844917s to joinCluster
	I0719 18:32:16.300257   33150 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:32:16.300538   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:32:16.301754   33150 out.go:177] * Verifying Kubernetes components...
	I0719 18:32:16.303049   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:32:16.567441   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:32:16.619902   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:32:16.620244   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 18:32:16.620383   33150 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.163:8443
	I0719 18:32:16.620647   33150 node_ready.go:35] waiting up to 6m0s for node "ha-269108-m02" to be "Ready" ...
	I0719 18:32:16.620730   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:16.620740   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:16.620751   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:16.620760   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:16.629506   33150 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 18:32:17.120918   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:17.120941   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:17.120950   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:17.120954   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:17.124699   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:17.621688   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:17.621711   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:17.621719   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:17.621723   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:17.626698   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:18.121183   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:18.121206   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:18.121214   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:18.121218   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:18.137958   33150 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 18:32:18.621462   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:18.621486   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:18.621495   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:18.621502   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:18.625196   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:18.625846   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:19.121195   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:19.121222   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:19.121234   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:19.121239   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:19.125364   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:19.621046   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:19.621065   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:19.621071   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:19.621076   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:19.623975   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:20.121666   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:20.121703   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:20.121713   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:20.121718   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:20.126700   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:20.620940   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:20.620967   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:20.620978   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:20.620983   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:20.624350   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:21.121399   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:21.121420   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:21.121428   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:21.121432   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:21.124874   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:21.125427   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:21.621480   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:21.621506   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:21.621518   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:21.621523   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:21.624698   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:22.121722   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:22.121743   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:22.121752   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:22.121756   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:22.125212   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:22.620856   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:22.620877   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:22.620884   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:22.620887   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:22.624701   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.121679   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:23.121700   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:23.121710   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:23.121716   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:23.124780   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.620976   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:23.620999   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:23.621008   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:23.621012   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:23.624839   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:23.625621   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:24.121520   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:24.121544   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:24.121551   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:24.121557   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:24.124875   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:24.621856   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:24.621877   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:24.621885   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:24.621889   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:24.625201   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:25.121308   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:25.121333   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:25.121345   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:25.121353   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:25.124441   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:25.621478   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:25.621503   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:25.621510   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:25.621514   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:25.624753   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:26.121726   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:26.121747   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:26.121754   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:26.121757   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:26.124654   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:26.125240   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:26.621299   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:26.621320   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:26.621328   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:26.621332   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:26.624263   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:27.121542   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:27.121565   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:27.121577   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:27.121581   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:27.125106   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:27.621862   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:27.621880   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:27.621888   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:27.621892   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:27.625552   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:28.121495   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:28.121517   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:28.121526   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:28.121531   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:28.125339   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:28.125816   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:28.621571   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:28.621595   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:28.621602   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:28.621607   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:28.625396   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:29.121215   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:29.121236   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:29.121244   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:29.121247   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:29.124255   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:29.621182   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:29.621209   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:29.621221   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:29.621226   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:29.624761   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:30.121745   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:30.121776   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:30.121792   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:30.121796   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:30.124764   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:30.621862   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:30.621882   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:30.621892   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:30.621898   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:30.628276   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:30.628857   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:31.121057   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:31.121079   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:31.121089   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:31.121094   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:31.124779   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:31.621747   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:31.621769   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:31.621779   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:31.621783   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:31.625218   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:32.121237   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:32.121259   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:32.121267   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:32.121271   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:32.124400   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:32.621474   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:32.621496   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:32.621507   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:32.621511   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:32.624876   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:33.121293   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:33.121326   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:33.121353   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:33.121361   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:33.124364   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:33.124991   33150 node_ready.go:53] node "ha-269108-m02" has status "Ready":"False"
	I0719 18:32:33.620825   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:33.620845   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:33.620853   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:33.620855   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:33.623974   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:34.120884   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:34.120909   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:34.120917   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:34.120921   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:34.123974   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:34.620836   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:34.620861   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:34.620871   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:34.620877   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:34.624478   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.121110   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.121139   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.121151   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.121156   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.124402   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.125038   33150 node_ready.go:49] node "ha-269108-m02" has status "Ready":"True"
	I0719 18:32:35.125055   33150 node_ready.go:38] duration metric: took 18.504392019s for node "ha-269108-m02" to be "Ready" ...
	I0719 18:32:35.125065   33150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:32:35.125125   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:35.125133   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.125140   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.125144   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.129441   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:35.135047   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.135129   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-frb69
	I0719 18:32:35.135138   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.135146   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.135150   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.138040   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.138615   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.138630   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.138643   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.138647   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.140697   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.141184   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.141200   33150 pod_ready.go:81] duration metric: took 6.134036ms for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.141208   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.141250   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxh8h
	I0719 18:32:35.141257   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.141263   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.141267   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.143552   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.144142   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.144167   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.144174   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.144179   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.146257   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.146773   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.146789   33150 pod_ready.go:81] duration metric: took 5.574926ms for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.146800   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.146859   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108
	I0719 18:32:35.146868   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.146878   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.146889   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.148706   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.149127   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.149141   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.149148   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.149153   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.150865   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.151218   33150 pod_ready.go:92] pod "etcd-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.151231   33150 pod_ready.go:81] duration metric: took 4.424138ms for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.151241   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.151288   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m02
	I0719 18:32:35.151298   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.151307   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.151313   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.153285   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.153799   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.153812   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.153818   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.153823   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.155775   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:32:35.156211   33150 pod_ready.go:92] pod "etcd-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.156228   33150 pod_ready.go:81] duration metric: took 4.980654ms for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.156243   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.321660   33150 request.go:629] Waited for 165.362561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:32:35.321748   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:32:35.321760   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.321771   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.321779   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.325238   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.521120   33150 request.go:629] Waited for 195.275453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.521183   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:35.521188   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.521195   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.521200   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.524114   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.524681   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.524698   33150 pod_ready.go:81] duration metric: took 368.449533ms for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.524708   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.721824   33150 request.go:629] Waited for 197.054878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:32:35.721881   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:32:35.721886   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.721903   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.721910   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.725507   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:35.921084   33150 request.go:629] Waited for 194.711523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.921172   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:35.921185   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:35.921195   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:35.921211   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:35.923984   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:32:35.924837   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:35.924855   33150 pod_ready.go:81] duration metric: took 400.138736ms for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:35.924864   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.121883   33150 request.go:629] Waited for 196.949216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:32:36.121939   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:32:36.121946   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.121955   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.121960   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.125180   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.321250   33150 request.go:629] Waited for 195.30303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:36.321325   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:36.321331   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.321339   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.321344   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.324367   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.324924   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:36.324941   33150 pod_ready.go:81] duration metric: took 400.071173ms for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.324954   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.521100   33150 request.go:629] Waited for 196.078894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:32:36.521164   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:32:36.521170   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.521181   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.521189   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.524882   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.721824   33150 request.go:629] Waited for 196.361603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:36.721882   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:36.721897   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.721905   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.721911   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.725292   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:36.725975   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:36.725997   33150 pod_ready.go:81] duration metric: took 401.035221ms for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.726010   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:36.922016   33150 request.go:629] Waited for 195.942105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:32:36.922073   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:32:36.922078   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:36.922085   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:36.922088   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:36.925675   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.121654   33150 request.go:629] Waited for 195.361094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.121714   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.121721   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.121730   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.121734   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.125342   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.125923   33150 pod_ready.go:92] pod "kube-proxy-pkjns" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.125942   33150 pod_ready.go:81] duration metric: took 399.924914ms for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.125954   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.321089   33150 request.go:629] Waited for 195.06817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:32:37.321153   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:32:37.321159   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.321166   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.321169   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.325503   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:37.521495   33150 request.go:629] Waited for 195.35182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:37.521552   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:37.521557   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.521565   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.521569   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.525645   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:32:37.526356   33150 pod_ready.go:92] pod "kube-proxy-vr7bz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.526371   33150 pod_ready.go:81] duration metric: took 400.410651ms for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.526389   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.721967   33150 request.go:629] Waited for 195.499184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:32:37.722017   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:32:37.722022   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.722029   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.722034   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.725392   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.921256   33150 request.go:629] Waited for 195.292837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.921306   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:32:37.921311   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:37.921320   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:37.921325   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:37.924807   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:37.925240   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:37.925260   33150 pod_ready.go:81] duration metric: took 398.863069ms for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:37.925272   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:38.121456   33150 request.go:629] Waited for 196.116624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:32:38.121515   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:32:38.121520   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.121531   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.121538   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.124785   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.321677   33150 request.go:629] Waited for 196.388232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:38.321743   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:32:38.321750   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.321759   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.321763   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.325635   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.326135   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:32:38.326152   33150 pod_ready.go:81] duration metric: took 400.872775ms for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:32:38.326162   33150 pod_ready.go:38] duration metric: took 3.201086857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
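The pod_ready checks above poll each system pod's Ready condition (and its host node) through the apiserver, pausing whenever the client-side rate limiter logged by request.go kicks in. Below is a minimal client-go sketch of that kind of readiness poll; it is illustrative only, not minikube's pod_ready.go, and the kubeconfig path is an assumed placeholder (the pod name is taken from this run).

    // poll_pod_ready.go: illustrative sketch of polling a pod's Ready condition.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path; the test run uses the profile's own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-269108", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // re-check until Ready or the 6m context expires
        }
    }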
	I0719 18:32:38.326176   33150 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:32:38.326221   33150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:32:38.341378   33150 api_server.go:72] duration metric: took 22.041085997s to wait for apiserver process to appear ...
	I0719 18:32:38.341404   33150 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:32:38.341423   33150 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I0719 18:32:38.345678   33150 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I0719 18:32:38.345745   33150 round_trippers.go:463] GET https://192.168.39.163:8443/version
	I0719 18:32:38.345752   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.345764   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.345774   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.346614   33150 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 18:32:38.346758   33150 api_server.go:141] control plane version: v1.30.3
	I0719 18:32:38.346781   33150 api_server.go:131] duration metric: took 5.370366ms to wait for apiserver health ...
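The healthz probe logged just above is a plain HTTPS GET against https://192.168.39.163:8443/healthz that expects a 200 response with the body "ok". A hedged net/http sketch of that check follows; it skips TLS verification purely for brevity, whereas the real check trusts the profile's CA certificate.

    // healthz_check.go: illustrative apiserver health probe, not minikube's api_server.go.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumption for the sketch: skip certificate verification instead of
        // loading the cluster CA bundle as the real code does.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.163:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status %d: %s\n", resp.StatusCode, body) // expect: status 200: ok
    }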
	I0719 18:32:38.346789   33150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:32:38.521424   33150 request.go:629] Waited for 174.561181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.521480   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.521486   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.521493   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.521497   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.527885   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:38.532086   33150 system_pods.go:59] 17 kube-system pods found
	I0719 18:32:38.532108   33150 system_pods.go:61] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:32:38.532114   33150 system_pods.go:61] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:32:38.532117   33150 system_pods.go:61] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:32:38.532121   33150 system_pods.go:61] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:32:38.532124   33150 system_pods.go:61] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:32:38.532127   33150 system_pods.go:61] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:32:38.532130   33150 system_pods.go:61] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:32:38.532134   33150 system_pods.go:61] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:32:38.532137   33150 system_pods.go:61] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:32:38.532140   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:32:38.532143   33150 system_pods.go:61] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:32:38.532146   33150 system_pods.go:61] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:32:38.532149   33150 system_pods.go:61] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:32:38.532152   33150 system_pods.go:61] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:32:38.532155   33150 system_pods.go:61] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:32:38.532157   33150 system_pods.go:61] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:32:38.532160   33150 system_pods.go:61] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:32:38.532165   33150 system_pods.go:74] duration metric: took 185.367616ms to wait for pod list to return data ...
	I0719 18:32:38.532174   33150 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:32:38.721593   33150 request.go:629] Waited for 189.343841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:32:38.721651   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:32:38.721657   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.721666   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.721672   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.724957   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:38.725179   33150 default_sa.go:45] found service account: "default"
	I0719 18:32:38.725204   33150 default_sa.go:55] duration metric: took 193.024673ms for default service account to be created ...
	I0719 18:32:38.725212   33150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:32:38.921656   33150 request.go:629] Waited for 196.380607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.921728   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:32:38.921737   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:38.921744   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:38.921750   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:38.927970   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:32:38.931858   33150 system_pods.go:86] 17 kube-system pods found
	I0719 18:32:38.931883   33150 system_pods.go:89] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:32:38.931889   33150 system_pods.go:89] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:32:38.931893   33150 system_pods.go:89] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:32:38.931898   33150 system_pods.go:89] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:32:38.931902   33150 system_pods.go:89] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:32:38.931906   33150 system_pods.go:89] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:32:38.931910   33150 system_pods.go:89] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:32:38.931914   33150 system_pods.go:89] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:32:38.931919   33150 system_pods.go:89] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:32:38.931925   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:32:38.931929   33150 system_pods.go:89] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:32:38.931935   33150 system_pods.go:89] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:32:38.931939   33150 system_pods.go:89] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:32:38.931945   33150 system_pods.go:89] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:32:38.931949   33150 system_pods.go:89] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:32:38.931955   33150 system_pods.go:89] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:32:38.931959   33150 system_pods.go:89] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:32:38.931968   33150 system_pods.go:126] duration metric: took 206.748824ms to wait for k8s-apps to be running ...
	I0719 18:32:38.931977   33150 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:32:38.932020   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:32:38.946614   33150 system_svc.go:56] duration metric: took 14.622933ms WaitForService to wait for kubelet
	I0719 18:32:38.946649   33150 kubeadm.go:582] duration metric: took 22.646358973s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:32:38.946675   33150 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:32:39.122087   33150 request.go:629] Waited for 175.330722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes
	I0719 18:32:39.122138   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes
	I0719 18:32:39.122144   33150 round_trippers.go:469] Request Headers:
	I0719 18:32:39.122154   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:32:39.122164   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:32:39.125738   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:32:39.126366   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:32:39.126392   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:32:39.126402   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:32:39.126406   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:32:39.126410   33150 node_conditions.go:105] duration metric: took 179.729295ms to run NodePressure ...
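The NodePressure figures above (ephemeral-storage capacity 17734596Ki and 2 CPUs per node) are read from each node's status returned by /api/v1/nodes. A short client-go sketch of that read, again assuming a placeholder kubeconfig path:

    // node_capacity.go: illustrative read of per-node capacity, not minikube's node_conditions.go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }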
	I0719 18:32:39.126423   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:32:39.126447   33150 start.go:255] writing updated cluster config ...
	I0719 18:32:39.128558   33150 out.go:177] 
	I0719 18:32:39.130063   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:32:39.130158   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:32:39.131810   33150 out.go:177] * Starting "ha-269108-m03" control-plane node in "ha-269108" cluster
	I0719 18:32:39.132991   33150 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:32:39.133009   33150 cache.go:56] Caching tarball of preloaded images
	I0719 18:32:39.133099   33150 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:32:39.133112   33150 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:32:39.133207   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:32:39.133390   33150 start.go:360] acquireMachinesLock for ha-269108-m03: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:32:39.133438   33150 start.go:364] duration metric: took 27.66µs to acquireMachinesLock for "ha-269108-m03"
	I0719 18:32:39.133462   33150 start.go:93] Provisioning new machine with config: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:32:39.133580   33150 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 18:32:39.135010   33150 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 18:32:39.135081   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:32:39.135110   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:32:39.149933   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0719 18:32:39.150332   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:32:39.150921   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:32:39.150940   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:32:39.151262   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:32:39.151462   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:32:39.151612   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:32:39.151764   33150 start.go:159] libmachine.API.Create for "ha-269108" (driver="kvm2")
	I0719 18:32:39.151791   33150 client.go:168] LocalClient.Create starting
	I0719 18:32:39.151817   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 18:32:39.151861   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:32:39.151876   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:32:39.151921   33150 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 18:32:39.151938   33150 main.go:141] libmachine: Decoding PEM data...
	I0719 18:32:39.151949   33150 main.go:141] libmachine: Parsing certificate...
	I0719 18:32:39.151968   33150 main.go:141] libmachine: Running pre-create checks...
	I0719 18:32:39.151976   33150 main.go:141] libmachine: (ha-269108-m03) Calling .PreCreateCheck
	I0719 18:32:39.152135   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:32:39.152519   33150 main.go:141] libmachine: Creating machine...
	I0719 18:32:39.152532   33150 main.go:141] libmachine: (ha-269108-m03) Calling .Create
	I0719 18:32:39.152733   33150 main.go:141] libmachine: (ha-269108-m03) Creating KVM machine...
	I0719 18:32:39.153913   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found existing default KVM network
	I0719 18:32:39.154118   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found existing private KVM network mk-ha-269108
	I0719 18:32:39.154268   33150 main.go:141] libmachine: (ha-269108-m03) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 ...
	I0719 18:32:39.154292   33150 main.go:141] libmachine: (ha-269108-m03) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:32:39.154364   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.154269   33893 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:32:39.154434   33150 main.go:141] libmachine: (ha-269108-m03) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 18:32:39.373669   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.373525   33893 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa...
	I0719 18:32:39.495402   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.495267   33893 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/ha-269108-m03.rawdisk...
	I0719 18:32:39.495432   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Writing magic tar header
	I0719 18:32:39.495446   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Writing SSH key tar header
	I0719 18:32:39.495459   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:39.495387   33893 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 ...
	I0719 18:32:39.495475   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03
	I0719 18:32:39.495573   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03 (perms=drwx------)
	I0719 18:32:39.495595   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 18:32:39.495606   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 18:32:39.495636   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 18:32:39.495655   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:32:39.495674   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 18:32:39.495691   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 18:32:39.495702   33150 main.go:141] libmachine: (ha-269108-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 18:32:39.495713   33150 main.go:141] libmachine: (ha-269108-m03) Creating domain...
	I0719 18:32:39.495731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 18:32:39.495744   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 18:32:39.495758   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 18:32:39.495770   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Checking permissions on dir: /home
	I0719 18:32:39.495783   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Skipping /home - not owner
	I0719 18:32:39.496629   33150 main.go:141] libmachine: (ha-269108-m03) define libvirt domain using xml: 
	I0719 18:32:39.496652   33150 main.go:141] libmachine: (ha-269108-m03) <domain type='kvm'>
	I0719 18:32:39.496682   33150 main.go:141] libmachine: (ha-269108-m03)   <name>ha-269108-m03</name>
	I0719 18:32:39.496705   33150 main.go:141] libmachine: (ha-269108-m03)   <memory unit='MiB'>2200</memory>
	I0719 18:32:39.496715   33150 main.go:141] libmachine: (ha-269108-m03)   <vcpu>2</vcpu>
	I0719 18:32:39.496727   33150 main.go:141] libmachine: (ha-269108-m03)   <features>
	I0719 18:32:39.496739   33150 main.go:141] libmachine: (ha-269108-m03)     <acpi/>
	I0719 18:32:39.496746   33150 main.go:141] libmachine: (ha-269108-m03)     <apic/>
	I0719 18:32:39.496755   33150 main.go:141] libmachine: (ha-269108-m03)     <pae/>
	I0719 18:32:39.496763   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.496768   33150 main.go:141] libmachine: (ha-269108-m03)   </features>
	I0719 18:32:39.496775   33150 main.go:141] libmachine: (ha-269108-m03)   <cpu mode='host-passthrough'>
	I0719 18:32:39.496781   33150 main.go:141] libmachine: (ha-269108-m03)   
	I0719 18:32:39.496790   33150 main.go:141] libmachine: (ha-269108-m03)   </cpu>
	I0719 18:32:39.496798   33150 main.go:141] libmachine: (ha-269108-m03)   <os>
	I0719 18:32:39.496812   33150 main.go:141] libmachine: (ha-269108-m03)     <type>hvm</type>
	I0719 18:32:39.496824   33150 main.go:141] libmachine: (ha-269108-m03)     <boot dev='cdrom'/>
	I0719 18:32:39.496834   33150 main.go:141] libmachine: (ha-269108-m03)     <boot dev='hd'/>
	I0719 18:32:39.496845   33150 main.go:141] libmachine: (ha-269108-m03)     <bootmenu enable='no'/>
	I0719 18:32:39.496855   33150 main.go:141] libmachine: (ha-269108-m03)   </os>
	I0719 18:32:39.496862   33150 main.go:141] libmachine: (ha-269108-m03)   <devices>
	I0719 18:32:39.496875   33150 main.go:141] libmachine: (ha-269108-m03)     <disk type='file' device='cdrom'>
	I0719 18:32:39.496900   33150 main.go:141] libmachine: (ha-269108-m03)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/boot2docker.iso'/>
	I0719 18:32:39.496918   33150 main.go:141] libmachine: (ha-269108-m03)       <target dev='hdc' bus='scsi'/>
	I0719 18:32:39.496930   33150 main.go:141] libmachine: (ha-269108-m03)       <readonly/>
	I0719 18:32:39.496939   33150 main.go:141] libmachine: (ha-269108-m03)     </disk>
	I0719 18:32:39.496948   33150 main.go:141] libmachine: (ha-269108-m03)     <disk type='file' device='disk'>
	I0719 18:32:39.496956   33150 main.go:141] libmachine: (ha-269108-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 18:32:39.496964   33150 main.go:141] libmachine: (ha-269108-m03)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/ha-269108-m03.rawdisk'/>
	I0719 18:32:39.496971   33150 main.go:141] libmachine: (ha-269108-m03)       <target dev='hda' bus='virtio'/>
	I0719 18:32:39.496977   33150 main.go:141] libmachine: (ha-269108-m03)     </disk>
	I0719 18:32:39.496984   33150 main.go:141] libmachine: (ha-269108-m03)     <interface type='network'>
	I0719 18:32:39.496991   33150 main.go:141] libmachine: (ha-269108-m03)       <source network='mk-ha-269108'/>
	I0719 18:32:39.497001   33150 main.go:141] libmachine: (ha-269108-m03)       <model type='virtio'/>
	I0719 18:32:39.497025   33150 main.go:141] libmachine: (ha-269108-m03)     </interface>
	I0719 18:32:39.497042   33150 main.go:141] libmachine: (ha-269108-m03)     <interface type='network'>
	I0719 18:32:39.497058   33150 main.go:141] libmachine: (ha-269108-m03)       <source network='default'/>
	I0719 18:32:39.497069   33150 main.go:141] libmachine: (ha-269108-m03)       <model type='virtio'/>
	I0719 18:32:39.497083   33150 main.go:141] libmachine: (ha-269108-m03)     </interface>
	I0719 18:32:39.497093   33150 main.go:141] libmachine: (ha-269108-m03)     <serial type='pty'>
	I0719 18:32:39.497105   33150 main.go:141] libmachine: (ha-269108-m03)       <target port='0'/>
	I0719 18:32:39.497116   33150 main.go:141] libmachine: (ha-269108-m03)     </serial>
	I0719 18:32:39.497127   33150 main.go:141] libmachine: (ha-269108-m03)     <console type='pty'>
	I0719 18:32:39.497135   33150 main.go:141] libmachine: (ha-269108-m03)       <target type='serial' port='0'/>
	I0719 18:32:39.497147   33150 main.go:141] libmachine: (ha-269108-m03)     </console>
	I0719 18:32:39.497157   33150 main.go:141] libmachine: (ha-269108-m03)     <rng model='virtio'>
	I0719 18:32:39.497171   33150 main.go:141] libmachine: (ha-269108-m03)       <backend model='random'>/dev/random</backend>
	I0719 18:32:39.497180   33150 main.go:141] libmachine: (ha-269108-m03)     </rng>
	I0719 18:32:39.497193   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.497200   33150 main.go:141] libmachine: (ha-269108-m03)     
	I0719 18:32:39.497205   33150 main.go:141] libmachine: (ha-269108-m03)   </devices>
	I0719 18:32:39.497214   33150 main.go:141] libmachine: (ha-269108-m03) </domain>
	I0719 18:32:39.497227   33150 main.go:141] libmachine: (ha-269108-m03) 
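The kvm2 driver feeds the domain XML printed above to libvirt in order to define and boot ha-269108-m03. The sketch below shows equivalent calls through the libvirt.org/go/libvirt binding; the binding choice, the XML file name, and the error handling are assumptions made for illustration, not the driver's actual code.

    // define_domain.go: minimal sketch of defining and starting a libvirt domain from XML.
    // Assumes the libvirt.org/go/libvirt binding (CGO plus libvirt development headers).
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-269108-m03.xml") // hypothetical file holding the XML shown above
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the machine config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it ("Creating domain..." in the log)
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain", name)
    }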
	I0719 18:32:39.503657   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:5b:75:d6 in network default
	I0719 18:32:39.504300   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring networks are active...
	I0719 18:32:39.504323   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:39.505005   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring network default is active
	I0719 18:32:39.505341   33150 main.go:141] libmachine: (ha-269108-m03) Ensuring network mk-ha-269108 is active
	I0719 18:32:39.505684   33150 main.go:141] libmachine: (ha-269108-m03) Getting domain xml...
	I0719 18:32:39.506378   33150 main.go:141] libmachine: (ha-269108-m03) Creating domain...
	I0719 18:32:40.725450   33150 main.go:141] libmachine: (ha-269108-m03) Waiting to get IP...
	I0719 18:32:40.726366   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:40.726797   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:40.726840   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:40.726783   33893 retry.go:31] will retry after 210.192396ms: waiting for machine to come up
	I0719 18:32:40.938058   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:40.938443   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:40.938469   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:40.938388   33893 retry.go:31] will retry after 374.294477ms: waiting for machine to come up
	I0719 18:32:41.313929   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:41.314394   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:41.314423   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:41.314330   33893 retry.go:31] will retry after 397.700343ms: waiting for machine to come up
	I0719 18:32:41.713844   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:41.714224   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:41.714247   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:41.714184   33893 retry.go:31] will retry after 460.580971ms: waiting for machine to come up
	I0719 18:32:42.176610   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:42.177142   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:42.177171   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:42.177087   33893 retry.go:31] will retry after 646.320778ms: waiting for machine to come up
	I0719 18:32:42.824988   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:42.825465   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:42.825491   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:42.825413   33893 retry.go:31] will retry after 892.756613ms: waiting for machine to come up
	I0719 18:32:43.720467   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:43.720914   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:43.720940   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:43.720863   33893 retry.go:31] will retry after 1.152338271s: waiting for machine to come up
	I0719 18:32:44.875127   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:44.875578   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:44.875598   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:44.875543   33893 retry.go:31] will retry after 1.065599322s: waiting for machine to come up
	I0719 18:32:45.942620   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:45.942976   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:45.943004   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:45.942938   33893 retry.go:31] will retry after 1.77360277s: waiting for machine to come up
	I0719 18:32:47.718554   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:47.719068   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:47.719105   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:47.719009   33893 retry.go:31] will retry after 2.016398592s: waiting for machine to come up
	I0719 18:32:49.737314   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:49.737793   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:49.737824   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:49.737756   33893 retry.go:31] will retry after 1.850110184s: waiting for machine to come up
	I0719 18:32:51.589604   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:51.589975   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:51.590004   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:51.589928   33893 retry.go:31] will retry after 2.429018042s: waiting for machine to come up
	I0719 18:32:54.020850   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:54.021367   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:54.021397   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:54.021304   33893 retry.go:31] will retry after 3.578950672s: waiting for machine to come up
	I0719 18:32:57.601544   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:32:57.601964   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find current IP address of domain ha-269108-m03 in network mk-ha-269108
	I0719 18:32:57.601991   33150 main.go:141] libmachine: (ha-269108-m03) DBG | I0719 18:32:57.601920   33893 retry.go:31] will retry after 5.362333357s: waiting for machine to come up
	I0719 18:33:02.967092   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:02.967642   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has current primary IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:02.967669   33150 main.go:141] libmachine: (ha-269108-m03) Found IP for machine: 192.168.39.111
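The "will retry after ..." lines above come from a retry loop that polls the new domain's DHCP lease with growing, jittered delays until an IP address appears or a timeout is hit. A self-contained sketch of that pattern follows; the lookup function is a placeholder, not the driver's real lease query.

    // wait_for_ip.go: illustrative retry-with-jittered-backoff loop.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying libvirt's DHCP leases by MAC address.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Add jitter so retries don't line up, e.g. "will retry after 374.294477ms".
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
            time.Sleep(jittered)
            if delay < 5*time.Second {
                delay *= 2 // grow the base delay up to a cap
            }
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        ip, err := waitForIP(3 * time.Minute)
        fmt.Println(ip, err)
    }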
	I0719 18:33:02.967684   33150 main.go:141] libmachine: (ha-269108-m03) Reserving static IP address...
	I0719 18:33:02.968141   33150 main.go:141] libmachine: (ha-269108-m03) DBG | unable to find host DHCP lease matching {name: "ha-269108-m03", mac: "52:54:00:d3:60:8c", ip: "192.168.39.111"} in network mk-ha-269108
	I0719 18:33:03.039511   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Getting to WaitForSSH function...
	I0719 18:33:03.039540   33150 main.go:141] libmachine: (ha-269108-m03) Reserved static IP address: 192.168.39.111
	I0719 18:33:03.039553   33150 main.go:141] libmachine: (ha-269108-m03) Waiting for SSH to be available...
	I0719 18:33:03.042009   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.042490   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.042520   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.042653   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using SSH client type: external
	I0719 18:33:03.042665   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa (-rw-------)
	I0719 18:33:03.042724   33150 main.go:141] libmachine: (ha-269108-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 18:33:03.042742   33150 main.go:141] libmachine: (ha-269108-m03) DBG | About to run SSH command:
	I0719 18:33:03.042757   33150 main.go:141] libmachine: (ha-269108-m03) DBG | exit 0
	I0719 18:33:03.175598   33150 main.go:141] libmachine: (ha-269108-m03) DBG | SSH cmd err, output: <nil>: 
	I0719 18:33:03.175866   33150 main.go:141] libmachine: (ha-269108-m03) KVM machine creation complete!
	I0719 18:33:03.176232   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:33:03.176754   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:03.176971   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:03.177153   33150 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 18:33:03.177166   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:33:03.178434   33150 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 18:33:03.178447   33150 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 18:33:03.178452   33150 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 18:33:03.178462   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.180852   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.181231   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.181254   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.181417   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.181594   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.181775   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.181920   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.182086   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.182287   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.182299   33150 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 18:33:03.297763   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
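The WaitForSSH step logged above simply runs `exit 0` over SSH with the machine's generated key until the command succeeds. A hedged sketch of one such probe using golang.org/x/crypto/ssh, with the address and key path taken from this run:

    // wait_for_ssh.go: illustrative single SSH probe, not the driver's WaitForSSH loop.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.111:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        if err := session.Run("exit 0"); err != nil { // the probe command from the log
            panic(err)
        }
        fmt.Println("SSH is available")
    }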
	I0719 18:33:03.297788   33150 main.go:141] libmachine: Detecting the provisioner...
	I0719 18:33:03.297799   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.300652   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.301069   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.301101   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.301293   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.301511   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.301648   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.301795   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.301966   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.302173   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.302189   33150 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 18:33:03.412178   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 18:33:03.412241   33150 main.go:141] libmachine: found compatible host: buildroot
	I0719 18:33:03.412248   33150 main.go:141] libmachine: Provisioning with buildroot...
	I0719 18:33:03.412256   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.412542   33150 buildroot.go:166] provisioning hostname "ha-269108-m03"
	I0719 18:33:03.412566   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.412728   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.415456   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.415729   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.415757   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.415891   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.416079   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.416234   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.416379   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.416555   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.416741   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.416759   33150 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108-m03 && echo "ha-269108-m03" | sudo tee /etc/hostname
	I0719 18:33:03.542883   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108-m03
	
	I0719 18:33:03.542916   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.545487   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.545794   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.545820   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.545931   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.546125   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.546279   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.546403   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.546569   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.546758   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.546775   33150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:33:03.668542   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:33:03.668568   33150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:33:03.668584   33150 buildroot.go:174] setting up certificates
	I0719 18:33:03.668594   33150 provision.go:84] configureAuth start
	I0719 18:33:03.668603   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetMachineName
	I0719 18:33:03.668874   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:03.671519   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.671869   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.671892   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.672011   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.674260   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.674643   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.674668   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.674791   33150 provision.go:143] copyHostCerts
	I0719 18:33:03.674824   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:33:03.674858   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:33:03.674869   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:33:03.674934   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:33:03.675009   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:33:03.675034   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:33:03.675040   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:33:03.675064   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:33:03.675124   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:33:03.675139   33150 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:33:03.675145   33150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:33:03.675165   33150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:33:03.675228   33150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108-m03 san=[127.0.0.1 192.168.39.111 ha-269108-m03 localhost minikube]
	I0719 18:33:03.794983   33150 provision.go:177] copyRemoteCerts
	I0719 18:33:03.795034   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:33:03.795056   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.797426   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.797765   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.797792   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.798012   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.798240   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.798517   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.798687   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:03.885886   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:33:03.885965   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:33:03.907808   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:33:03.907891   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 18:33:03.929652   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:33:03.929717   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 18:33:03.951378   33150 provision.go:87] duration metric: took 282.771798ms to configureAuth
	I0719 18:33:03.951404   33150 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:33:03.951637   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:03.951725   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:03.954343   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.954703   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:03.954731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:03.954897   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:03.955082   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.955240   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:03.955336   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:03.955503   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:03.955700   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:03.955723   33150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:33:04.226707   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:33:04.226729   33150 main.go:141] libmachine: Checking connection to Docker...
	I0719 18:33:04.226737   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetURL
	I0719 18:33:04.227969   33150 main.go:141] libmachine: (ha-269108-m03) DBG | Using libvirt version 6000000
	I0719 18:33:04.230350   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.230731   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.230759   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.230938   33150 main.go:141] libmachine: Docker is up and running!
	I0719 18:33:04.230953   33150 main.go:141] libmachine: Reticulating splines...
	I0719 18:33:04.230959   33150 client.go:171] duration metric: took 25.079162097s to LocalClient.Create
	I0719 18:33:04.230978   33150 start.go:167] duration metric: took 25.079214807s to libmachine.API.Create "ha-269108"
	I0719 18:33:04.230987   33150 start.go:293] postStartSetup for "ha-269108-m03" (driver="kvm2")
	I0719 18:33:04.230996   33150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:33:04.231010   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.231213   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:33:04.231239   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.233590   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.233972   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.234001   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.234145   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.234310   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.234474   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.234618   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.323073   33150 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:33:04.327041   33150 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:33:04.327064   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:33:04.327123   33150 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:33:04.327196   33150 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:33:04.327205   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:33:04.327279   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:33:04.335591   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:33:04.356906   33150 start.go:296] duration metric: took 125.907474ms for postStartSetup
	I0719 18:33:04.356959   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetConfigRaw
	I0719 18:33:04.357532   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:04.360358   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.360899   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.360925   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.361203   33150 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:33:04.361379   33150 start.go:128] duration metric: took 25.227789706s to createHost
	I0719 18:33:04.361401   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.363499   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.363825   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.363859   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.364009   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.364178   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.364335   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.364528   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.364688   33150 main.go:141] libmachine: Using SSH client type: native
	I0719 18:33:04.364846   33150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0719 18:33:04.364855   33150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:33:04.476394   33150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721413984.456165797
	
	I0719 18:33:04.476418   33150 fix.go:216] guest clock: 1721413984.456165797
	I0719 18:33:04.476428   33150 fix.go:229] Guest: 2024-07-19 18:33:04.456165797 +0000 UTC Remote: 2024-07-19 18:33:04.361391314 +0000 UTC m=+152.974396235 (delta=94.774483ms)
	I0719 18:33:04.476449   33150 fix.go:200] guest clock delta is within tolerance: 94.774483ms
	I0719 18:33:04.476455   33150 start.go:83] releasing machines lock for "ha-269108-m03", held for 25.343006363s
	I0719 18:33:04.476480   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.476742   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:04.479410   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.479741   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.479765   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.482154   33150 out.go:177] * Found network options:
	I0719 18:33:04.483606   33150 out.go:177]   - NO_PROXY=192.168.39.163,192.168.39.87
	W0719 18:33:04.484793   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 18:33:04.484814   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:33:04.484831   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485362   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485576   33150 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:33:04.485679   33150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:33:04.485716   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	W0719 18:33:04.485802   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 18:33:04.485823   33150 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 18:33:04.485882   33150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:33:04.485903   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:33:04.488175   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488477   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.488504   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488611   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.488637   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.488783   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.488965   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.489013   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:04.489040   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:04.489122   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.489181   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:33:04.489310   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:33:04.489443   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:33:04.489576   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:33:04.722278   33150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:33:04.728160   33150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:33:04.728246   33150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:33:04.745040   33150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 18:33:04.745066   33150 start.go:495] detecting cgroup driver to use...
	I0719 18:33:04.745142   33150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:33:04.760830   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:33:04.774261   33150 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:33:04.774322   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:33:04.787354   33150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:33:04.800627   33150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:33:04.905302   33150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:33:05.049685   33150 docker.go:233] disabling docker service ...
	I0719 18:33:05.049741   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:33:05.064518   33150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:33:05.078363   33150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:33:05.224145   33150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:33:05.351293   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:33:05.364081   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:33:05.381230   33150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:33:05.381280   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.390574   33150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:33:05.390640   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.400221   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.410885   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.421484   33150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:33:05.432646   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.442273   33150 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.459170   33150 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:33:05.469148   33150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:33:05.479572   33150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 18:33:05.479632   33150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 18:33:05.491280   33150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:33:05.500464   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:05.626289   33150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 18:33:05.762237   33150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:33:05.762315   33150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:33:05.766673   33150 start.go:563] Will wait 60s for crictl version
	I0719 18:33:05.766727   33150 ssh_runner.go:195] Run: which crictl
	I0719 18:33:05.770091   33150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:33:05.807287   33150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:33:05.807371   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:33:05.833345   33150 ssh_runner.go:195] Run: crio --version
	I0719 18:33:05.867549   33150 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:33:05.868994   33150 out.go:177]   - env NO_PROXY=192.168.39.163
	I0719 18:33:05.870387   33150 out.go:177]   - env NO_PROXY=192.168.39.163,192.168.39.87
	I0719 18:33:05.871489   33150 main.go:141] libmachine: (ha-269108-m03) Calling .GetIP
	I0719 18:33:05.873823   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:05.874207   33150 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:33:05.874236   33150 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:33:05.874426   33150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:33:05.878106   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:33:05.889796   33150 mustload.go:65] Loading cluster: ha-269108
	I0719 18:33:05.890007   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:05.890259   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:05.890295   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:05.905200   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42299
	I0719 18:33:05.905604   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:05.906025   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:05.906050   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:05.906309   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:05.906525   33150 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:33:05.907985   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:33:05.908303   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:05.908341   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:05.922260   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0719 18:33:05.922599   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:05.923041   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:05.923059   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:05.923332   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:05.923510   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:33:05.923633   33150 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.111
	I0719 18:33:05.923644   33150 certs.go:194] generating shared ca certs ...
	I0719 18:33:05.923662   33150 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:05.923794   33150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:33:05.923864   33150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:33:05.923878   33150 certs.go:256] generating profile certs ...
	I0719 18:33:05.923963   33150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:33:05.923993   33150 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e
	I0719 18:33:05.924014   33150 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.111 192.168.39.254]
	I0719 18:33:06.025750   33150 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e ...
	I0719 18:33:06.025786   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e: {Name:mk51ba19514db8b85ef9b3173ef1e66f87760fef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:06.025973   33150 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e ...
	I0719 18:33:06.025990   33150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e: {Name:mkc4cce785486f459321a7d862eda10c6eda9616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:33:06.026088   33150 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.d20dd27e -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:33:06.026237   33150 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.d20dd27e -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:33:06.026403   33150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:33:06.026419   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:33:06.026437   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:33:06.026453   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:33:06.026471   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:33:06.026488   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:33:06.026506   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:33:06.026523   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:33:06.026547   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:33:06.026622   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:33:06.026665   33150 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:33:06.026678   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:33:06.026711   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:33:06.026741   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:33:06.026770   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:33:06.026828   33150 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:33:06.026864   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.026883   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.026900   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.026938   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:33:06.029911   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:06.030330   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:33:06.030350   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:06.030598   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:33:06.030793   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:33:06.030952   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:33:06.031142   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:33:06.104148   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 18:33:06.109724   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 18:33:06.121831   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 18:33:06.126025   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 18:33:06.138562   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 18:33:06.143389   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 18:33:06.154473   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 18:33:06.159023   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 18:33:06.171644   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 18:33:06.175264   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 18:33:06.184464   33150 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 18:33:06.188359   33150 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 18:33:06.198566   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:33:06.221966   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:33:06.243388   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:33:06.265285   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:33:06.289054   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 18:33:06.312452   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:33:06.336995   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:33:06.361263   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:33:06.384792   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:33:06.407244   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:33:06.430957   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:33:06.455042   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 18:33:06.471039   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 18:33:06.487363   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 18:33:06.508221   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 18:33:06.522920   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 18:33:06.538405   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 18:33:06.553152   33150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 18:33:06.569197   33150 ssh_runner.go:195] Run: openssl version
	I0719 18:33:06.574663   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:33:06.584356   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.588528   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.588575   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:33:06.594073   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:33:06.603847   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:33:06.613253   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.617081   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.617131   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:33:06.622068   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:33:06.631532   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:33:06.640869   33150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.644888   33150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.644941   33150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:33:06.650230   33150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
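The three test/ln commands above install each CA bundle and then link it under its OpenSSL subject hash (for example 51391683.0), which is how OpenSSL locates trust anchors in /etc/ssl/certs. As an editorial aside, not part of the captured log, a minimal sketch of the same idea with a placeholder file name:

	# Illustrative only; "example-ca.pem" is a hypothetical certificate, not one from this run.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"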
	I0719 18:33:06.661150   33150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:33:06.664801   33150 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 18:33:06.664856   33150 kubeadm.go:934] updating node {m03 192.168.39.111 8443 v1.30.3 crio true true} ...
	I0719 18:33:06.664953   33150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:33:06.664983   33150 kube-vip.go:115] generating kube-vip config ...
	I0719 18:33:06.665020   33150 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:33:06.679864   33150 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:33:06.679914   33150 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
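The static pod manifest printed above is what minikube writes to /etc/kubernetes/manifests/kube-vip.yaml so kube-vip can announce the control-plane virtual IP 192.168.39.254 on eth0. As a hedged aside (not part of the recorded log), one way to confirm the VIP on a control-plane guest could look like the following; only the profile name, interface and VIP are taken from the log, the commands themselves are illustrative:

	# Illustrative sketch, assuming `minikube ssh -p ha-269108` drops you into a control-plane guest.
	sudo crictl ps --name kube-vip                   # the kube-vip static pod should be running under CRI-O
	ip -4 addr show dev eth0 | grep 192.168.39.254 || echo "VIP currently held by another control-plane node"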
	I0719 18:33:06.679958   33150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:33:06.690356   33150 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 18:33:06.690402   33150 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 18:33:06.700762   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 18:33:06.700781   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 18:33:06.700800   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:33:06.700811   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:33:06.700855   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 18:33:06.700764   33150 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 18:33:06.700894   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:33:06.700964   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 18:33:06.717579   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 18:33:06.717608   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 18:33:06.717612   33150 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:33:06.717656   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 18:33:06.717678   33150 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 18:33:06.717681   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 18:33:06.752029   33150 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 18:33:06.752070   33150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 18:33:07.586359   33150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 18:33:07.595678   33150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 18:33:07.611590   33150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:33:07.626485   33150 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:33:07.641336   33150 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:33:07.645172   33150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 18:33:07.656486   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:07.768930   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:33:07.786944   33150 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:33:07.787499   33150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:33:07.787553   33150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:33:07.803903   33150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0719 18:33:07.804403   33150 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:33:07.804935   33150 main.go:141] libmachine: Using API Version  1
	I0719 18:33:07.804959   33150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:33:07.805322   33150 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:33:07.805505   33150 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:33:07.805656   33150 start.go:317] joinCluster: &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:33:07.805772   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 18:33:07.805794   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:33:07.809326   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:07.809800   33150 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:33:07.809827   33150 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:33:07.810000   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:33:07.810184   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:33:07.810390   33150 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:33:07.810542   33150 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:33:07.954991   33150 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:33:07.955046   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qu2999.8ci5v73iktlw95nz --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0719 18:33:32.414858   33150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qu2999.8ci5v73iktlw95nz --discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-269108-m03 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (24.45978558s)
	I0719 18:33:32.414893   33150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 18:33:32.915184   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-269108-m03 minikube.k8s.io/updated_at=2024_07_19T18_33_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=ha-269108 minikube.k8s.io/primary=false
	I0719 18:33:33.044435   33150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-269108-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 18:33:33.172662   33150 start.go:319] duration metric: took 25.367000212s to joinCluster
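The block above is the HA join flow: a fresh join command is minted on an existing control plane with kubeadm token create --print-join-command --ttl=0, the new machine runs it with --control-plane and its own advertise address, and the node is then labeled and its control-plane NoSchedule taint removed. A hedged sketch of that shape with os/exec, assuming kubeadm and kubectl are on PATH where each step runs (minikube actually drives these over SSH, which is omitted here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(script string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Step 1 (on an existing control-plane node): mint a join command with a non-expiring token.
	joinCmd, err := run("kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}

	// Step 2 (on the joining node): run it with the control-plane flags seen in the log.
	full := joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	if out, err := run(full); err != nil {
		fmt.Println("join failed:", err, out)
		return
	}

	// Step 3: label the node and drop the NoSchedule taint so it can also run workloads.
	_, _ = run("kubectl label --overwrite nodes ha-269108-m03 minikube.k8s.io/primary=false")
	_, _ = run("kubectl taint nodes ha-269108-m03 node-role.kubernetes.io/control-plane:NoSchedule-")
}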
	I0719 18:33:33.172744   33150 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 18:33:33.173085   33150 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:33:33.174347   33150 out.go:177] * Verifying Kubernetes components...
	I0719 18:33:33.175666   33150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:33:33.367744   33150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:33:33.384358   33150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:33:33.384642   33150 kapi.go:59] client config for ha-269108: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 18:33:33.384704   33150 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.163:8443
	I0719 18:33:33.384872   33150 node_ready.go:35] waiting up to 6m0s for node "ha-269108-m03" to be "Ready" ...
	I0719 18:33:33.384931   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:33.384937   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:33.384944   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:33.384949   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:33.388619   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:33.885896   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:33.885922   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:33.885934   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:33.885940   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:33.889364   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:34.385648   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:34.385668   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:34.385676   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:34.385680   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:34.389540   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:34.886044   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:34.886068   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:34.886083   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:34.886090   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:34.889126   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:35.385120   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:35.385145   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:35.385157   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:35.385162   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:35.387912   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:35.388778   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:35.885030   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:35.885049   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:35.885059   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:35.885065   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:35.888518   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:36.385310   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:36.385337   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:36.385348   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:36.385354   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:36.388519   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:36.885373   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:36.885396   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:36.885406   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:36.885412   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:36.888405   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:37.385233   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:37.385259   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:37.385267   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:37.385271   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:37.389256   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:37.389944   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:37.886013   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:37.886039   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:37.886050   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:37.886057   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:37.891072   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:33:38.386007   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:38.386030   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:38.386041   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:38.386047   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:38.389299   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:38.885238   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:38.885256   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:38.885265   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:38.885270   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:38.902954   33150 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0719 18:33:39.385197   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:39.385217   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:39.385226   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:39.385229   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:39.388731   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:39.885205   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:39.885228   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:39.885236   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:39.885241   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:39.888329   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:39.889154   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:40.385081   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:40.385102   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:40.385110   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:40.385113   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:40.388090   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:40.885692   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:40.885718   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:40.885727   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:40.885733   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:40.888685   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:41.385489   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:41.385508   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:41.385515   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:41.385518   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:41.389220   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:41.885275   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:41.885297   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:41.885304   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:41.885308   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:41.888469   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:42.385306   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:42.385328   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:42.385337   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:42.385340   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:42.388266   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:42.389088   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:42.885092   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:42.885126   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:42.885133   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:42.885138   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:42.888294   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:43.385303   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:43.385345   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:43.385358   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:43.385363   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:43.388120   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:43.885838   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:43.885859   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:43.885869   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:43.885875   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:43.888940   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:44.385989   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:44.386011   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:44.386022   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:44.386026   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:44.389337   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:44.389897   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:44.885118   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:44.885136   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:44.885144   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:44.885147   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:44.887989   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:45.385963   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:45.385993   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:45.386004   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:45.386013   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:45.389083   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:45.886036   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:45.886057   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:45.886065   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:45.886070   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:45.889176   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.385753   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:46.385773   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:46.385781   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:46.385786   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:46.389028   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.885870   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:46.885909   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:46.885919   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:46.885927   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:46.888961   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:46.889579   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:47.386039   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:47.386063   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:47.386076   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:47.386082   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:47.388953   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:47.885558   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:47.885588   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:47.885598   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:47.885604   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:47.889107   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:48.385517   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:48.385538   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:48.385547   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:48.385553   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:48.388811   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:48.885662   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:48.885682   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:48.885694   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:48.885701   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:48.888700   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:49.385254   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:49.385273   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:49.385281   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:49.385288   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:49.388281   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:49.388942   33150 node_ready.go:53] node "ha-269108-m03" has status "Ready":"False"
	I0719 18:33:49.885335   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:49.885355   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:49.885364   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:49.885368   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:49.888183   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.385196   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:50.385229   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.385240   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.385246   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.388735   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:50.885864   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:50.885896   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.885905   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.885908   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.890726   33150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 18:33:50.891652   33150 node_ready.go:49] node "ha-269108-m03" has status "Ready":"True"
	I0719 18:33:50.891677   33150 node_ready.go:38] duration metric: took 17.506791833s for node "ha-269108-m03" to be "Ready" ...
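The repeated GETs above are a plain poll of /api/v1/nodes/ha-269108-m03 until the node's Ready condition flips to True, bounded by the 6m0s wait announced earlier. A minimal client-go sketch of the same check, assuming a kubeconfig path; the node name and timeout are taken from the log, and nothing here is minikube's own helper code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-269108-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls on a similar sub-second cadence
	}
	fmt.Println("timed out waiting for node to become Ready")
}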
	I0719 18:33:50.891689   33150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:33:50.891762   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:50.891780   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.891790   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.891797   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.898298   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:50.904298   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.904362   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-frb69
	I0719 18:33:50.904370   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.904377   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.904380   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.906974   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.907762   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.907781   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.907792   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.907798   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.910343   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.910922   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.910936   33150 pod_ready.go:81] duration metric: took 6.618815ms for pod "coredns-7db6d8ff4d-frb69" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.910945   33150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.910992   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zxh8h
	I0719 18:33:50.910999   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.911006   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.911012   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.913213   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.913788   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.913805   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.913815   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.913822   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.915769   33150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 18:33:50.916429   33150 pod_ready.go:92] pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.916445   33150 pod_ready.go:81] duration metric: took 5.492683ms for pod "coredns-7db6d8ff4d-zxh8h" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.916452   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.916494   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108
	I0719 18:33:50.916500   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.916507   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.916512   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.919032   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.919555   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:50.919571   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.919581   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.919588   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.921939   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.922568   33150 pod_ready.go:92] pod "etcd-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.922582   33150 pod_ready.go:81] duration metric: took 6.124343ms for pod "etcd-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.922590   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.922634   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m02
	I0719 18:33:50.922642   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.922649   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.922657   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.925078   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.926211   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:50.926225   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:50.926236   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:50.926243   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:50.928592   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:50.929127   33150 pod_ready.go:92] pod "etcd-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:50.929142   33150 pod_ready.go:81] duration metric: took 6.539503ms for pod "etcd-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:50.929151   33150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.086528   33150 request.go:629] Waited for 157.314288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m03
	I0719 18:33:51.086590   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/etcd-ha-269108-m03
	I0719 18:33:51.086595   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.086602   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.086606   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.089686   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.286732   33150 request.go:629] Waited for 196.367056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:51.286787   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:51.286792   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.286809   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.286813   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.289916   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.290565   33150 pod_ready.go:92] pod "etcd-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:51.290585   33150 pod_ready.go:81] duration metric: took 361.427312ms for pod "etcd-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.290605   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.486938   33150 request.go:629] Waited for 196.242049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:33:51.487014   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108
	I0719 18:33:51.487019   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.487026   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.487031   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.490314   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:51.686276   33150 request.go:629] Waited for 195.330296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:51.686334   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:51.686341   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.686350   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.686360   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.689202   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:51.689805   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:51.689825   33150 pod_ready.go:81] duration metric: took 399.209615ms for pod "kube-apiserver-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.689834   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:51.886890   33150 request.go:629] Waited for 196.990678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:33:51.886959   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m02
	I0719 18:33:51.886965   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:51.886973   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:51.886977   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:51.890150   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.086074   33150 request.go:629] Waited for 195.295195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:52.086139   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:52.086144   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.086151   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.086157   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.089406   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.089845   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.089860   33150 pod_ready.go:81] duration metric: took 400.019512ms for pod "kube-apiserver-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.089872   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.285881   33150 request.go:629] Waited for 195.938468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m03
	I0719 18:33:52.285952   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-269108-m03
	I0719 18:33:52.285960   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.285971   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.285977   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.289331   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.486616   33150 request.go:629] Waited for 196.390046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:52.486688   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:52.486694   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.486706   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.486713   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.489835   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.490460   33150 pod_ready.go:92] pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.490481   33150 pod_ready.go:81] duration metric: took 400.601648ms for pod "kube-apiserver-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.490496   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.686880   33150 request.go:629] Waited for 196.317164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:33:52.686945   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108
	I0719 18:33:52.686954   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.686988   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.686999   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.690489   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.886570   33150 request.go:629] Waited for 195.328532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:52.886623   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:52.886633   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:52.886640   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:52.886644   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:52.889770   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:52.890296   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:52.890313   33150 pod_ready.go:81] duration metric: took 399.809414ms for pod "kube-controller-manager-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:52.890323   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.086423   33150 request.go:629] Waited for 196.028268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:33:53.086492   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m02
	I0719 18:33:53.086500   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.086513   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.086522   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.089646   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.286712   33150 request.go:629] Waited for 196.358979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:53.286790   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:53.286798   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.286808   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.286816   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.290404   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.291379   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:53.291402   33150 pod_ready.go:81] duration metric: took 401.073104ms for pod "kube-controller-manager-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.291411   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.486727   33150 request.go:629] Waited for 195.2611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m03
	I0719 18:33:53.486801   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-269108-m03
	I0719 18:33:53.486809   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.486819   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.486828   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.489707   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:53.686861   33150 request.go:629] Waited for 196.357422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:53.686908   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:53.686913   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.686919   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.686923   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.689982   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:53.690888   33150 pod_ready.go:92] pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:53.690908   33150 pod_ready.go:81] duration metric: took 399.490475ms for pod "kube-controller-manager-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.690918   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zpp2" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:53.886841   33150 request.go:629] Waited for 195.858187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zpp2
	I0719 18:33:53.886910   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zpp2
	I0719 18:33:53.886915   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:53.886923   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:53.886931   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:53.889909   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:54.085887   33150 request.go:629] Waited for 195.273343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:54.085936   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:54.085941   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.085950   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.085956   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.089208   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.089940   33150 pod_ready.go:92] pod "kube-proxy-9zpp2" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.089957   33150 pod_ready.go:81] duration metric: took 399.033ms for pod "kube-proxy-9zpp2" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.089967   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.286451   33150 request.go:629] Waited for 196.406428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:33:54.286512   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pkjns
	I0719 18:33:54.286518   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.286525   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.286530   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.290305   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.486310   33150 request.go:629] Waited for 195.343805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:54.486363   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:54.486368   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.486382   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.486389   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.489415   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.490207   33150 pod_ready.go:92] pod "kube-proxy-pkjns" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.490227   33150 pod_ready.go:81] duration metric: took 400.253784ms for pod "kube-proxy-pkjns" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.490236   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.686250   33150 request.go:629] Waited for 195.947003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:33:54.686316   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vr7bz
	I0719 18:33:54.686321   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.686334   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.686339   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.689714   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.886045   33150 request.go:629] Waited for 195.278872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:54.886097   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:54.886102   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:54.886110   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:54.886114   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:54.889463   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:54.890278   33150 pod_ready.go:92] pod "kube-proxy-vr7bz" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:54.890299   33150 pod_ready.go:81] duration metric: took 400.056328ms for pod "kube-proxy-vr7bz" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:54.890312   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.086378   33150 request.go:629] Waited for 196.00321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:33:55.086434   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108
	I0719 18:33:55.086441   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.086451   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.086457   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.089184   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:55.285942   33150 request.go:629] Waited for 196.156276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:55.286006   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108
	I0719 18:33:55.286013   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.286023   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.286028   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.289161   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:55.289806   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:55.289824   33150 pod_ready.go:81] duration metric: took 399.504873ms for pod "kube-scheduler-ha-269108" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.289833   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.485994   33150 request.go:629] Waited for 196.089371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:33:55.486077   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m02
	I0719 18:33:55.486083   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.486093   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.486098   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.489047   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:55.685881   33150 request.go:629] Waited for 196.165669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:55.685947   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m02
	I0719 18:33:55.685954   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.685965   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.685973   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.689080   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:55.689711   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:55.689727   33150 pod_ready.go:81] duration metric: took 399.88846ms for pod "kube-scheduler-ha-269108-m02" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.689736   33150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:55.886831   33150 request.go:629] Waited for 197.025983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m03
	I0719 18:33:55.886910   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-269108-m03
	I0719 18:33:55.886917   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:55.886927   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:55.886941   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:55.890802   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.086044   33150 request.go:629] Waited for 194.690243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:56.086119   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes/ha-269108-m03
	I0719 18:33:56.086128   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.086136   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.086143   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.088917   33150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 18:33:56.089462   33150 pod_ready.go:92] pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 18:33:56.089482   33150 pod_ready.go:81] duration metric: took 399.73818ms for pod "kube-scheduler-ha-269108-m03" in "kube-system" namespace to be "Ready" ...
	I0719 18:33:56.089497   33150 pod_ready.go:38] duration metric: took 5.197794662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 18:33:56.089514   33150 api_server.go:52] waiting for apiserver process to appear ...
	I0719 18:33:56.089573   33150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:33:56.106014   33150 api_server.go:72] duration metric: took 22.933236564s to wait for apiserver process to appear ...
	I0719 18:33:56.106029   33150 api_server.go:88] waiting for apiserver healthz status ...
	I0719 18:33:56.106043   33150 api_server.go:253] Checking apiserver healthz at https://192.168.39.163:8443/healthz ...
	I0719 18:33:56.110067   33150 api_server.go:279] https://192.168.39.163:8443/healthz returned 200:
	ok
	I0719 18:33:56.110133   33150 round_trippers.go:463] GET https://192.168.39.163:8443/version
	I0719 18:33:56.110143   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.110159   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.110170   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.110998   33150 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 18:33:56.111087   33150 api_server.go:141] control plane version: v1.30.3
	I0719 18:33:56.111104   33150 api_server.go:131] duration metric: took 5.067921ms to wait for apiserver health ...
	I0719 18:33:56.111112   33150 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 18:33:56.286511   33150 request.go:629] Waited for 175.332478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.286581   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.286586   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.286594   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.286598   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.293303   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:56.300529   33150 system_pods.go:59] 24 kube-system pods found
	I0719 18:33:56.300561   33150 system_pods.go:61] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:33:56.300566   33150 system_pods.go:61] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:33:56.300570   33150 system_pods.go:61] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:33:56.300573   33150 system_pods.go:61] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:33:56.300577   33150 system_pods.go:61] "etcd-ha-269108-m03" [03383126-0715-4c44-8c71-58f6a3ad717f] Running
	I0719 18:33:56.300582   33150 system_pods.go:61] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:33:56.300585   33150 system_pods.go:61] "kindnet-bmcvm" [41d2a7b7-173e-402e-9323-f6efed16ce26] Running
	I0719 18:33:56.300587   33150 system_pods.go:61] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:33:56.300591   33150 system_pods.go:61] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:33:56.300594   33150 system_pods.go:61] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:33:56.300597   33150 system_pods.go:61] "kube-apiserver-ha-269108-m03" [409c3247-2b63-44b3-8563-3be63e699ba8] Running
	I0719 18:33:56.300600   33150 system_pods.go:61] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:33:56.300604   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:33:56.300607   33150 system_pods.go:61] "kube-controller-manager-ha-269108-m03" [aa4307d2-ee77-4c87-8a9c-668e56ea9ca5] Running
	I0719 18:33:56.300610   33150 system_pods.go:61] "kube-proxy-9zpp2" [395be6bf-b8bd-4848-8550-cbc2e647c238] Running
	I0719 18:33:56.300613   33150 system_pods.go:61] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:33:56.300616   33150 system_pods.go:61] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:33:56.300620   33150 system_pods.go:61] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:33:56.300624   33150 system_pods.go:61] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:33:56.300626   33150 system_pods.go:61] "kube-scheduler-ha-269108-m03" [a6501f5b-579b-4940-bbcb-1ed60b08a0ef] Running
	I0719 18:33:56.300629   33150 system_pods.go:61] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:33:56.300636   33150 system_pods.go:61] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:33:56.300641   33150 system_pods.go:61] "kube-vip-ha-269108-m03" [c6a6f347-4ae3-4401-b654-895809bfcce0] Running
	I0719 18:33:56.300646   33150 system_pods.go:61] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:33:56.300651   33150 system_pods.go:74] duration metric: took 189.533061ms to wait for pod list to return data ...
	I0719 18:33:56.300660   33150 default_sa.go:34] waiting for default service account to be created ...
	I0719 18:33:56.486185   33150 request.go:629] Waited for 185.465008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:33:56.486235   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/default/serviceaccounts
	I0719 18:33:56.486240   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.486247   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.486251   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.489296   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.489399   33150 default_sa.go:45] found service account: "default"
	I0719 18:33:56.489412   33150 default_sa.go:55] duration metric: took 188.746699ms for default service account to be created ...
	I0719 18:33:56.489419   33150 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 18:33:56.686064   33150 request.go:629] Waited for 196.570052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.686128   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/namespaces/kube-system/pods
	I0719 18:33:56.686135   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.686146   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.686152   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.692563   33150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 18:33:56.698840   33150 system_pods.go:86] 24 kube-system pods found
	I0719 18:33:56.698865   33150 system_pods.go:89] "coredns-7db6d8ff4d-frb69" [bd2dc6fa-ab54-44e1-b3b1-3e63af76140b] Running
	I0719 18:33:56.698872   33150 system_pods.go:89] "coredns-7db6d8ff4d-zxh8h" [1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8] Running
	I0719 18:33:56.698876   33150 system_pods.go:89] "etcd-ha-269108" [dc807567-f8b9-44c5-abda-6dd170f77bf2] Running
	I0719 18:33:56.698880   33150 system_pods.go:89] "etcd-ha-269108-m02" [5f17ee06-e4d1-4e51-a8de-80bf7d5e644b] Running
	I0719 18:33:56.698884   33150 system_pods.go:89] "etcd-ha-269108-m03" [03383126-0715-4c44-8c71-58f6a3ad717f] Running
	I0719 18:33:56.698889   33150 system_pods.go:89] "kindnet-4lhkr" [92211e52-9a67-4b01-a4af-0769596d7103] Running
	I0719 18:33:56.698894   33150 system_pods.go:89] "kindnet-bmcvm" [41d2a7b7-173e-402e-9323-f6efed16ce26] Running
	I0719 18:33:56.698901   33150 system_pods.go:89] "kindnet-fs5wk" [f5461a69-fd24-47dd-8376-f8492c8dad0b] Running
	I0719 18:33:56.698909   33150 system_pods.go:89] "kube-apiserver-ha-269108" [99659542-3ba1-4035-83a1-8729586d2cc0] Running
	I0719 18:33:56.698915   33150 system_pods.go:89] "kube-apiserver-ha-269108-m02" [2637cd4e-b5b5-4d1d-9913-c5cd000b9d48] Running
	I0719 18:33:56.698924   33150 system_pods.go:89] "kube-apiserver-ha-269108-m03" [409c3247-2b63-44b3-8563-3be63e699ba8] Running
	I0719 18:33:56.698929   33150 system_pods.go:89] "kube-controller-manager-ha-269108" [def21bee-d990-48ec-88d3-5f6b8aa1f7c9] Running
	I0719 18:33:56.698936   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m02" [e4074996-8a87-4788-af15-172f1ac7fe67] Running
	I0719 18:33:56.698944   33150 system_pods.go:89] "kube-controller-manager-ha-269108-m03" [aa4307d2-ee77-4c87-8a9c-668e56ea9ca5] Running
	I0719 18:33:56.698948   33150 system_pods.go:89] "kube-proxy-9zpp2" [395be6bf-b8bd-4848-8550-cbc2e647c238] Running
	I0719 18:33:56.698955   33150 system_pods.go:89] "kube-proxy-pkjns" [f97d4b6b-eb80-46c3-9513-ef6e9af5318e] Running
	I0719 18:33:56.698959   33150 system_pods.go:89] "kube-proxy-vr7bz" [cae5da2f-2ee1-4b82-8471-479758fbab4f] Running
	I0719 18:33:56.698965   33150 system_pods.go:89] "kube-scheduler-ha-269108" [07b87e93-0008-46b6-90f2-3e933bb568b0] Running
	I0719 18:33:56.698969   33150 system_pods.go:89] "kube-scheduler-ha-269108-m02" [1dbd98dd-a9b7-464e-9380-d15ee99ab168] Running
	I0719 18:33:56.698975   33150 system_pods.go:89] "kube-scheduler-ha-269108-m03" [a6501f5b-579b-4940-bbcb-1ed60b08a0ef] Running
	I0719 18:33:56.698979   33150 system_pods.go:89] "kube-vip-ha-269108" [a72f4143-401a-41a3-8adc-4cfc75369797] Running
	I0719 18:33:56.698985   33150 system_pods.go:89] "kube-vip-ha-269108-m02" [5e41e24e-112e-420e-b359-12a6540e9e34] Running
	I0719 18:33:56.698988   33150 system_pods.go:89] "kube-vip-ha-269108-m03" [c6a6f347-4ae3-4401-b654-895809bfcce0] Running
	I0719 18:33:56.698994   33150 system_pods.go:89] "storage-provisioner" [38aaccd6-4fb9-4744-b502-261ebeaf476a] Running
	I0719 18:33:56.699001   33150 system_pods.go:126] duration metric: took 209.577876ms to wait for k8s-apps to be running ...
	I0719 18:33:56.699013   33150 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 18:33:56.699064   33150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:33:56.714749   33150 system_svc.go:56] duration metric: took 15.731122ms WaitForService to wait for kubelet
	I0719 18:33:56.714778   33150 kubeadm.go:582] duration metric: took 23.542000656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:33:56.714800   33150 node_conditions.go:102] verifying NodePressure condition ...
	I0719 18:33:56.886307   33150 request.go:629] Waited for 171.436423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.163:8443/api/v1/nodes
	I0719 18:33:56.886367   33150 round_trippers.go:463] GET https://192.168.39.163:8443/api/v1/nodes
	I0719 18:33:56.886375   33150 round_trippers.go:469] Request Headers:
	I0719 18:33:56.886385   33150 round_trippers.go:473]     Accept: application/json, */*
	I0719 18:33:56.886392   33150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 18:33:56.890097   33150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 18:33:56.891049   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891068   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891080   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891089   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891095   33150 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 18:33:56.891102   33150 node_conditions.go:123] node cpu capacity is 2
	I0719 18:33:56.891110   33150 node_conditions.go:105] duration metric: took 176.304067ms to run NodePressure ...
	I0719 18:33:56.891128   33150 start.go:241] waiting for startup goroutines ...
	I0719 18:33:56.891157   33150 start.go:255] writing updated cluster config ...
	I0719 18:33:56.891475   33150 ssh_runner.go:195] Run: rm -f paused
	I0719 18:33:56.942189   33150 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 18:33:56.944116   33150 out.go:177] * Done! kubectl is now configured to use "ha-269108" cluster and "default" namespace by default
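(Editor's note: the startup log above ends with minikube's own readiness probes — apiserver /healthz, the kube-system pod list, and node conditions. A minimal manual re-check of the same things, a sketch only, assumes the profile and endpoint shown in the log (context name ha-269108, apiserver https://192.168.39.163:8443) are still reachable from the test host and that anonymous access to /healthz is allowed, as it is by default:)

    # query the same healthz endpoint the log polls; a healthy apiserver returns the plain text "ok"
    curl -k https://192.168.39.163:8443/healthz
    # list kube-system pods and node status through the kubeconfig context minikube wrote for this profile
    kubectl --context ha-269108 get pods -n kube-system
    kubectl --context ha-269108 get nodes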
	
	
	==> CRI-O <==
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.269386692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414317269364550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f77f884-bd8d-4ce3-ac57-9fb0c71c3afd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.270063577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31c9736e-ac9c-4b9f-a641-898e7684bb69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.270119493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31c9736e-ac9c-4b9f-a641-898e7684bb69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.270435819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31c9736e-ac9c-4b9f-a641-898e7684bb69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.308488356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1fecff3-f346-4efe-86f4-66a5b511ee1f name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.308565843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1fecff3-f346-4efe-86f4-66a5b511ee1f name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.309751838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c838c6c-06e9-4cda-a4ea-1aec323230a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.310386977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414317310359707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c838c6c-06e9-4cda-a4ea-1aec323230a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.311022378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36b1c451-a29d-479a-884e-4ca327759e01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.311077847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36b1c451-a29d-479a-884e-4ca327759e01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.311357938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36b1c451-a29d-479a-884e-4ca327759e01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.346663614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c814bdb-4e3f-49cd-8c85-c82e1f173922 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.346741925Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c814bdb-4e3f-49cd-8c85-c82e1f173922 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.347927372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=347edc85-ad4b-4847-808b-fc23c456543b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.348439548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414317348395581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=347edc85-ad4b-4847-808b-fc23c456543b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.349285718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dfb0233-f122-406d-a702-264bf224eff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.349342881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dfb0233-f122-406d-a702-264bf224eff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.349736879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dfb0233-f122-406d-a702-264bf224eff9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.387261536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dca8549d-b027-4ffa-8216-98f2e7851b7c name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.387338799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dca8549d-b027-4ffa-8216-98f2e7851b7c name=/runtime.v1.RuntimeService/Version
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.388908782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e28fcf5-ba83-4c10-8a8e-331d7b7b077a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.389385611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414317389361064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e28fcf5-ba83-4c10-8a8e-331d7b7b077a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.389908606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20e15e24-8e44-4195-be7b-ccd39a7bb6de name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.389966537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20e15e24-8e44-4195-be7b-ccd39a7bb6de name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:38:37 ha-269108 crio[681]: time="2024-07-19 18:38:37.390257396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414041026619137,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a98e1c0fea9d11195dfc6c8ea7f24783d955e7e8b78f7ab0613caaaec2fdb2ec,PodSandboxId:6a81dda96a2e125ee2baa5c574c3f65c8ad4e3da7e8fb629549d2cae5fffeacb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721413903514218061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903448828320,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721413903464875569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0
e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721413891286046511,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172141388
7206508706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60da0dad65c5ab983f0fbc1d10420e853f41839ab866eaaf09371aaf0491f8d,PodSandboxId:e951c3e25d5b2fc3a0068871912a7cdcb05b7001b553b74b20fd53a2deccd1c8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17214138697
00517880,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de71ad79df9fe13615bfb38834f9d84,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721413866852487199,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf,PodSandboxId:4d82589b51ae72d8e5c19644bffc61640df7d2b05cf61a96a43b290166e1b65c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721413866853583736,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721413866792666659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.po
d.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f,PodSandboxId:30c0f83ae920504360c74ec553bb005959425ea38a43c099c740baac94232898,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721413866820045920,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20e15e24-8e44-4195-be7b-ccd39a7bb6de name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7246971f0534       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   8ce5a6a0852e2       busybox-fc5497c4f-8qqtr
	a98e1c0fea9d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   6a81dda96a2e1       storage-provisioner
	4e3cdc51f7ed0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3932d100e152d       coredns-7db6d8ff4d-zxh8h
	160fdbf709fdf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7a1d288c9d262       coredns-7db6d8ff4d-frb69
	1b0808457c5c7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   eb0e9822a0f4e       kindnet-4lhkr
	61f7a66979140       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   1f4d197a59cd5       kube-proxy-pkjns
	e60da0dad65c5       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   e951c3e25d5b2       kube-vip-ha-269108
	d7d66135b54e4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   4d82589b51ae7       kube-apiserver-ha-269108
	00761e9db17ec       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   9a07a59b0e0e4       kube-scheduler-ha-269108
	c3d91756bd016       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   30c0f83ae9205       kube-controller-manager-ha-269108
	65cdebd7fead6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   4473d4683f2e1       etcd-ha-269108
	
	
	==> coredns [160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a] <==
	[INFO] 10.244.0.4:33226 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203273s
	[INFO] 10.244.0.4:44058 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016506s
	[INFO] 10.244.0.4:59413 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009965799s
	[INFO] 10.244.0.4:35057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093714s
	[INFO] 10.244.0.4:60700 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103292s
	[INFO] 10.244.1.2:34140 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001831088s
	[INFO] 10.244.1.2:55673 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078159s
	[INFO] 10.244.1.2:48518 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000715s
	[INFO] 10.244.1.2:60538 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070669s
	[INFO] 10.244.2.2:41729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195097s
	[INFO] 10.244.2.2:54527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110807s
	[INFO] 10.244.0.4:56953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073764s
	[INFO] 10.244.0.4:52941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231699s
	[INFO] 10.244.1.2:55601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109961s
	[INFO] 10.244.1.2:35949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100008s
	[INFO] 10.244.1.2:37921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013057s
	[INFO] 10.244.2.2:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135522s
	[INFO] 10.244.2.2:49412 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111732s
	[INFO] 10.244.0.4:60902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080561s
	[INFO] 10.244.0.4:46371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144185s
	[INFO] 10.244.1.2:42114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113886s
	[INFO] 10.244.1.2:42251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112767s
	[INFO] 10.244.1.2:32914 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121034s
	[INFO] 10.244.2.2:44583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109669s
	[INFO] 10.244.2.2:43221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085803s
	
	
	==> coredns [4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433] <==
	[INFO] 10.244.2.2:37253 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000085955s
	[INFO] 10.244.2.2:39927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001720898s
	[INFO] 10.244.0.4:36135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114066s
	[INFO] 10.244.0.4:39115 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004365932s
	[INFO] 10.244.0.4:43055 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088303s
	[INFO] 10.244.1.2:42061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106711s
	[INFO] 10.244.1.2:53460 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350422s
	[INFO] 10.244.1.2:37958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120338s
	[INFO] 10.244.1.2:41663 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057406s
	[INFO] 10.244.2.2:39895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001764061s
	[INFO] 10.244.2.2:32797 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140596s
	[INFO] 10.244.2.2:41374 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014218s
	[INFO] 10.244.2.2:38111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248331s
	[INFO] 10.244.2.2:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167068s
	[INFO] 10.244.2.2:35228 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156257s
	[INFO] 10.244.0.4:60265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081089s
	[INFO] 10.244.0.4:46991 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004503s
	[INFO] 10.244.1.2:54120 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097136s
	[INFO] 10.244.2.2:36674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166666s
	[INFO] 10.244.2.2:45715 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115611s
	[INFO] 10.244.0.4:49067 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123058s
	[INFO] 10.244.0.4:53779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117437s
	[INFO] 10.244.1.2:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133641s
	[INFO] 10.244.2.2:48409 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071901s
	[INFO] 10.244.2.2:34803 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085512s
	
	
	==> describe nodes <==
	Name:               ha-269108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:38:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:34:17 +0000   Fri, 19 Jul 2024 18:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-269108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e79eb460340347cf89b27475ae92f17a
	  System UUID:                e79eb460-3403-47cf-89b2-7475ae92f17a
	  Boot ID:                    54c61684-08fe-4593-a142-b4110f9ebe97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8qqtr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7db6d8ff4d-frb69             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 coredns-7db6d8ff4d-zxh8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m11s
	  kube-system                 etcd-ha-269108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m24s
	  kube-system                 kindnet-4lhkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-269108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-controller-manager-ha-269108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-proxy-pkjns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-269108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-vip-ha-269108                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m24s  kubelet          Node ha-269108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s  kubelet          Node ha-269108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s  kubelet          Node ha-269108 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal  NodeReady                6m55s  kubelet          Node ha-269108 status is now: NodeReady
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal  RegisteredNode           4m50s  node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	
	
	Name:               ha-269108-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:32:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:35:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 18:34:16 +0000   Fri, 19 Jul 2024 18:35:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-269108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd8856bcc7e74f5db8563df427c95529
	  System UUID:                bd8856bc-c7e7-4f5d-b856-3df427c95529
	  Boot ID:                    f0c82245-88b6-4481-bd3f-181d903ffe04
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hds6j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-269108-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-fs5wk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-269108-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-269108-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-vr7bz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-269108-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-269108-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-269108-m02 status is now: NodeNotReady
	
	
	Name:               ha-269108-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_33_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:33:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:38:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:34:30 +0000   Fri, 19 Jul 2024 18:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-269108-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e00e6f38a04bfdb40d7ea3e2c16e0c
	  System UUID:                c6e00e6f-38a0-4bfd-b40d-7ea3e2c16e0c
	  Boot ID:                    42ac80d5-69e5-48d1-95cb-2df50a1cc367
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lrphf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-269108-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-bmcvm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-apiserver-ha-269108-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-269108-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-9zpp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-ha-269108-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-269108-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-269108-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal  RegisteredNode           4m50s                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	
	
	Name:               ha-269108-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_34_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:34:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:38:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:35:04 +0000   Fri, 19 Jul 2024 18:34:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-269108-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db189232948d49d08567fb37977226c0
	  System UUID:                db189232-948d-49d0-8567-fb37977226c0
	  Boot ID:                    f229708f-8e4e-40a7-ad94-1f7eb3a46ee7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2cplc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-proxy-qfjxp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x2 over 4m3s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x2 over 4m3s)  kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x2 over 4m3s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-269108-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 18:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050935] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037004] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.447258] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.741619] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.433325] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.238779] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059116] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053977] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173547] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128447] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.273873] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[Jul19 18:31] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +3.987459] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062161] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240611] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.081692] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.045525] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.095803] kauditd_printk_skb: 36 callbacks suppressed
	[Jul19 18:32] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d] <==
	{"level":"warn","ts":"2024-07-19T18:38:37.459054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.558545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.622094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.631211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.6348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.647791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.65462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.658753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.660926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.664336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.667497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.676839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.683427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.690097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.693063Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.696133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.702326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.707497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.713253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.716454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.718713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.723531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.728454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.733877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:38:37.758503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:38:37 up 8 min,  0 users,  load average: 0.06, 0.14, 0.08
	Linux ha-269108 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671] <==
	I0719 18:38:02.393510       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:38:12.387296       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:38:12.387346       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:38:12.387489       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:38:12.387508       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:38:12.387569       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:38:12.387587       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:38:12.387651       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:38:12.387658       1 main.go:299] handling current node
	I0719 18:38:22.387326       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:38:22.387491       1 main.go:299] handling current node
	I0719 18:38:22.387542       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:38:22.387561       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:38:22.387739       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:38:22.387763       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:38:22.387829       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:38:22.387848       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:38:32.385925       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:38:32.386036       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:38:32.386313       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:38:32.386389       1 main.go:299] handling current node
	I0719 18:38:32.386430       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:38:32.386503       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:38:32.386630       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:38:32.386672       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d7d66135b54e45d783390d42b37d07758fbd3c77fef3397a9f916668a18a5cdf] <==
	I0719 18:31:11.919680       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 18:31:13.112936       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 18:31:13.131569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 18:31:13.320336       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 18:31:26.483972       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 18:31:26.553397       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0719 18:33:29.681799       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0719 18:33:29.681870       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0719 18:33:29.681912       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 5.742µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0719 18:33:29.683535       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0719 18:33:29.683721       1 timeout.go:142] post-timeout activity - time-elapsed: 2.081477ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0719 18:34:02.629460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46846: use of closed network connection
	E0719 18:34:02.795682       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46866: use of closed network connection
	E0719 18:34:02.974784       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46880: use of closed network connection
	E0719 18:34:03.169331       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46894: use of closed network connection
	E0719 18:34:03.359531       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46918: use of closed network connection
	E0719 18:34:03.716413       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46958: use of closed network connection
	E0719 18:34:03.881836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46976: use of closed network connection
	E0719 18:34:04.055280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47002: use of closed network connection
	E0719 18:34:04.324120       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47022: use of closed network connection
	E0719 18:34:04.486741       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47032: use of closed network connection
	E0719 18:34:04.671513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47050: use of closed network connection
	E0719 18:34:04.834903       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47062: use of closed network connection
	E0719 18:34:05.025312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47086: use of closed network connection
	E0719 18:34:05.196710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47116: use of closed network connection
	
	
	==> kube-controller-manager [c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f] <==
	I0719 18:33:57.872313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.69264ms"
	I0719 18:33:57.872572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.58µs"
	I0719 18:33:57.894061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.786µs"
	I0719 18:33:57.894883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.077µs"
	I0719 18:33:57.902541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.256µs"
	I0719 18:33:58.131958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="225.373822ms"
	I0719 18:33:58.157966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.887812ms"
	I0719 18:33:58.228064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.944763ms"
	I0719 18:33:58.230254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="486.306µs"
	I0719 18:33:58.291133       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.672145ms"
	I0719 18:33:58.291281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.602µs"
	I0719 18:33:58.803730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.691µs"
	I0719 18:34:01.668382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.33585ms"
	I0719 18:34:01.669328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.182µs"
	I0719 18:34:02.037972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.125574ms"
	I0719 18:34:02.038220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.779µs"
	E0719 18:34:34.060492       1 certificate_controller.go:146] Sync csr-j4jgq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j4jgq": the object has been modified; please apply your changes to the latest version and try again
	E0719 18:34:34.076540       1 certificate_controller.go:146] Sync csr-j4jgq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-j4jgq": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:34:34.333735       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-269108-m04\" does not exist"
	I0719 18:34:34.354537       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-269108-m04" podCIDRs=["10.244.3.0/24"]
	I0719 18:34:36.648737       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-269108-m04"
	I0719 18:34:54.964090       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-269108-m04"
	I0719 18:35:47.596523       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-269108-m04"
	I0719 18:35:47.632838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.338485ms"
	I0719 18:35:47.634539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.389µs"
	
	
	==> kube-proxy [61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452] <==
	I0719 18:31:27.588679       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:31:27.622593       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.163"]
	I0719 18:31:27.666787       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:31:27.666836       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:31:27.666853       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:31:27.669703       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:31:27.669988       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:31:27.670014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:31:27.672653       1 config.go:192] "Starting service config controller"
	I0719 18:31:27.672963       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:31:27.673019       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:31:27.673025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:31:27.674382       1 config.go:319] "Starting node config controller"
	I0719 18:31:27.676374       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:31:27.773740       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:31:27.773795       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:31:27.777532       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74] <==
	I0719 18:31:12.952590       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 18:33:28.922708       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-djpx2\": pod kube-proxy-djpx2 is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-djpx2" node="ha-269108-m03"
	E0719 18:33:28.922873       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-djpx2\": pod kube-proxy-djpx2 is already assigned to node \"ha-269108-m03\"" pod="kube-system/kube-proxy-djpx2"
	I0719 18:33:28.922939       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-djpx2" node="ha-269108-m03"
	E0719 18:33:28.998782       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9zpp2\": pod kube-proxy-9zpp2 is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9zpp2" node="ha-269108-m03"
	E0719 18:33:28.998854       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 395be6bf-b8bd-4848-8550-cbc2e647c238(kube-system/kube-proxy-9zpp2) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9zpp2"
	E0719 18:33:28.998884       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9zpp2\": pod kube-proxy-9zpp2 is already assigned to node \"ha-269108-m03\"" pod="kube-system/kube-proxy-9zpp2"
	I0719 18:33:28.998918       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9zpp2" node="ha-269108-m03"
	I0719 18:33:57.789554       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="baf3cafd-8ad7-4c5c-8143-e17b5aa2aa06" pod="default/busybox-fc5497c4f-lrphf" assumedNode="ha-269108-m03" currentNode="ha-269108-m02"
	E0719 18:33:57.795876       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lrphf\": pod busybox-fc5497c4f-lrphf is already assigned to node \"ha-269108-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lrphf" node="ha-269108-m02"
	E0719 18:33:57.795937       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod baf3cafd-8ad7-4c5c-8143-e17b5aa2aa06(default/busybox-fc5497c4f-lrphf) was assumed on ha-269108-m02 but assigned to ha-269108-m03" pod="default/busybox-fc5497c4f-lrphf"
	E0719 18:33:57.795953       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lrphf\": pod busybox-fc5497c4f-lrphf is already assigned to node \"ha-269108-m03\"" pod="default/busybox-fc5497c4f-lrphf"
	I0719 18:33:57.796029       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lrphf" node="ha-269108-m03"
	E0719 18:34:34.397980       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2cplc\": pod kindnet-2cplc is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2cplc" node="ha-269108-m04"
	E0719 18:34:34.398063       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod edf968d4-a9f2-4b87-b9a7-bb5dfdd8ded6(kube-system/kindnet-2cplc) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2cplc"
	E0719 18:34:34.398115       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2cplc\": pod kindnet-2cplc is already assigned to node \"ha-269108-m04\"" pod="kube-system/kindnet-2cplc"
	I0719 18:34:34.398188       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2cplc" node="ha-269108-m04"
	E0719 18:34:34.469601       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tpwth\": pod kindnet-tpwth is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tpwth" node="ha-269108-m04"
	E0719 18:34:34.469675       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 79321755-810f-4828-b33e-6b74811f37ce(kube-system/kindnet-tpwth) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tpwth"
	E0719 18:34:34.469699       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tpwth\": pod kindnet-tpwth is already assigned to node \"ha-269108-m04\"" pod="kube-system/kindnet-tpwth"
	I0719 18:34:34.469717       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tpwth" node="ha-269108-m04"
	E0719 18:34:34.607244       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-p8sld\": pod kube-proxy-p8sld is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-p8sld" node="ha-269108-m04"
	E0719 18:34:34.607398       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4817e36-a7dc-4c67-86e6-496a1164dfc7(kube-system/kube-proxy-p8sld) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-p8sld"
	E0719 18:34:34.607511       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-p8sld\": pod kube-proxy-p8sld is already assigned to node \"ha-269108-m04\"" pod="kube-system/kube-proxy-p8sld"
	I0719 18:34:34.607580       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-p8sld" node="ha-269108-m04"
	
	
	==> kubelet <==
	Jul 19 18:34:13 ha-269108 kubelet[1371]: E0719 18:34:13.285224    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:34:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:34:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:35:13 ha-269108 kubelet[1371]: E0719 18:35:13.274705    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:35:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:35:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:36:13 ha-269108 kubelet[1371]: E0719 18:36:13.275329    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:36:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:36:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:37:13 ha-269108 kubelet[1371]: E0719 18:37:13.274973    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:37:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:37:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:38:13 ha-269108 kubelet[1371]: E0719 18:38:13.277376    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:38:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:38:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:38:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:38:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-269108 -n ha-269108
helpers_test.go:261: (dbg) Run:  kubectl --context ha-269108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (356.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-269108 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-269108 -v=7 --alsologtostderr
E0719 18:39:43.525001   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:40:11.208618   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-269108 -v=7 --alsologtostderr: exit status 82 (2m1.812200941s)

                                                
                                                
-- stdout --
	* Stopping node "ha-269108-m04"  ...
	* Stopping node "ha-269108-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:38:39.160240   38971 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:38:39.160347   38971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:39.160356   38971 out.go:304] Setting ErrFile to fd 2...
	I0719 18:38:39.160360   38971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:38:39.160515   38971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:38:39.160740   38971 out.go:298] Setting JSON to false
	I0719 18:38:39.160839   38971 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:39.161172   38971 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:39.161252   38971 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:38:39.161438   38971 mustload.go:65] Loading cluster: ha-269108
	I0719 18:38:39.161602   38971 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:38:39.161634   38971 stop.go:39] StopHost: ha-269108-m04
	I0719 18:38:39.161981   38971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:39.162049   38971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:39.181596   38971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0719 18:38:39.182131   38971 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:39.182727   38971 main.go:141] libmachine: Using API Version  1
	I0719 18:38:39.182756   38971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:39.183152   38971 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:39.185245   38971 out.go:177] * Stopping node "ha-269108-m04"  ...
	I0719 18:38:39.186456   38971 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 18:38:39.186492   38971 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:38:39.186712   38971 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 18:38:39.186736   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:38:39.189816   38971 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:39.190137   38971 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:34:19 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:38:39.190169   38971 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:38:39.190290   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:38:39.190448   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:38:39.190594   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:38:39.190730   38971 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:38:39.278376   38971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 18:38:39.332714   38971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 18:38:39.386015   38971 main.go:141] libmachine: Stopping "ha-269108-m04"...
	I0719 18:38:39.386055   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:39.387696   38971 main.go:141] libmachine: (ha-269108-m04) Calling .Stop
	I0719 18:38:39.391341   38971 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 0/120
	I0719 18:38:40.516745   38971 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:38:40.517931   38971 main.go:141] libmachine: Machine "ha-269108-m04" was stopped.
	I0719 18:38:40.517950   38971 stop.go:75] duration metric: took 1.331493867s to stop
	I0719 18:38:40.517987   38971 stop.go:39] StopHost: ha-269108-m03
	I0719 18:38:40.518316   38971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:38:40.518367   38971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:38:40.532609   38971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0719 18:38:40.533096   38971 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:38:40.533588   38971 main.go:141] libmachine: Using API Version  1
	I0719 18:38:40.533608   38971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:38:40.533950   38971 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:38:40.535925   38971 out.go:177] * Stopping node "ha-269108-m03"  ...
	I0719 18:38:40.536964   38971 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 18:38:40.536987   38971 main.go:141] libmachine: (ha-269108-m03) Calling .DriverName
	I0719 18:38:40.537176   38971 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 18:38:40.537200   38971 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHHostname
	I0719 18:38:40.540049   38971 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:40.540483   38971 main.go:141] libmachine: (ha-269108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:60:8c", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:32:52 +0000 UTC Type:0 Mac:52:54:00:d3:60:8c Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-269108-m03 Clientid:01:52:54:00:d3:60:8c}
	I0719 18:38:40.540518   38971 main.go:141] libmachine: (ha-269108-m03) DBG | domain ha-269108-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:d3:60:8c in network mk-ha-269108
	I0719 18:38:40.540683   38971 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHPort
	I0719 18:38:40.540841   38971 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHKeyPath
	I0719 18:38:40.541015   38971 main.go:141] libmachine: (ha-269108-m03) Calling .GetSSHUsername
	I0719 18:38:40.541160   38971 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m03/id_rsa Username:docker}
	I0719 18:38:40.627050   38971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 18:38:40.679925   38971 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 18:38:40.734167   38971 main.go:141] libmachine: Stopping "ha-269108-m03"...
	I0719 18:38:40.734194   38971 main.go:141] libmachine: (ha-269108-m03) Calling .GetState
	I0719 18:38:40.735641   38971 main.go:141] libmachine: (ha-269108-m03) Calling .Stop
	I0719 18:38:40.739646   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 0/120
	I0719 18:38:41.740982   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 1/120
	I0719 18:38:42.742456   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 2/120
	I0719 18:38:43.743727   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 3/120
	I0719 18:38:44.745108   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 4/120
	I0719 18:38:45.746858   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 5/120
	I0719 18:38:46.748975   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 6/120
	I0719 18:38:47.750342   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 7/120
	I0719 18:38:48.751569   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 8/120
	I0719 18:38:49.752878   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 9/120
	I0719 18:38:50.754645   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 10/120
	I0719 18:38:51.755895   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 11/120
	I0719 18:38:52.757316   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 12/120
	I0719 18:38:53.758828   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 13/120
	I0719 18:38:54.760168   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 14/120
	I0719 18:38:55.761762   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 15/120
	I0719 18:38:56.762988   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 16/120
	I0719 18:38:57.764482   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 17/120
	I0719 18:38:58.766114   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 18/120
	I0719 18:38:59.767706   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 19/120
	I0719 18:39:00.769411   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 20/120
	I0719 18:39:01.770932   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 21/120
	I0719 18:39:02.772556   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 22/120
	I0719 18:39:03.773869   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 23/120
	I0719 18:39:04.775460   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 24/120
	I0719 18:39:05.777129   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 25/120
	I0719 18:39:06.778570   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 26/120
	I0719 18:39:07.779822   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 27/120
	I0719 18:39:08.781383   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 28/120
	I0719 18:39:09.782731   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 29/120
	I0719 18:39:10.784937   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 30/120
	I0719 18:39:11.786168   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 31/120
	I0719 18:39:12.787576   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 32/120
	I0719 18:39:13.788855   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 33/120
	I0719 18:39:14.790342   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 34/120
	I0719 18:39:15.792095   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 35/120
	I0719 18:39:16.793430   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 36/120
	I0719 18:39:17.794845   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 37/120
	I0719 18:39:18.796270   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 38/120
	I0719 18:39:19.798155   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 39/120
	I0719 18:39:20.799905   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 40/120
	I0719 18:39:21.801129   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 41/120
	I0719 18:39:22.802561   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 42/120
	I0719 18:39:23.803873   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 43/120
	I0719 18:39:24.805121   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 44/120
	I0719 18:39:25.806838   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 45/120
	I0719 18:39:26.808261   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 46/120
	I0719 18:39:27.809535   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 47/120
	I0719 18:39:28.811063   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 48/120
	I0719 18:39:29.812392   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 49/120
	I0719 18:39:30.814151   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 50/120
	I0719 18:39:31.815473   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 51/120
	I0719 18:39:32.816853   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 52/120
	I0719 18:39:33.818297   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 53/120
	I0719 18:39:34.819584   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 54/120
	I0719 18:39:35.821519   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 55/120
	I0719 18:39:36.822950   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 56/120
	I0719 18:39:37.824333   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 57/120
	I0719 18:39:38.826384   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 58/120
	I0719 18:39:39.827643   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 59/120
	I0719 18:39:40.829326   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 60/120
	I0719 18:39:41.830750   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 61/120
	I0719 18:39:42.832061   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 62/120
	I0719 18:39:43.833901   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 63/120
	I0719 18:39:44.835129   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 64/120
	I0719 18:39:45.836850   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 65/120
	I0719 18:39:46.838345   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 66/120
	I0719 18:39:47.839900   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 67/120
	I0719 18:39:48.841362   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 68/120
	I0719 18:39:49.842734   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 69/120
	I0719 18:39:50.845055   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 70/120
	I0719 18:39:51.846515   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 71/120
	I0719 18:39:52.848063   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 72/120
	I0719 18:39:53.849463   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 73/120
	I0719 18:39:54.850806   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 74/120
	I0719 18:39:55.852596   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 75/120
	I0719 18:39:56.853928   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 76/120
	I0719 18:39:57.855307   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 77/120
	I0719 18:39:58.856691   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 78/120
	I0719 18:39:59.858129   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 79/120
	I0719 18:40:00.860318   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 80/120
	I0719 18:40:01.861709   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 81/120
	I0719 18:40:02.863100   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 82/120
	I0719 18:40:03.864387   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 83/120
	I0719 18:40:04.865935   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 84/120
	I0719 18:40:05.867509   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 85/120
	I0719 18:40:06.868804   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 86/120
	I0719 18:40:07.870147   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 87/120
	I0719 18:40:08.871701   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 88/120
	I0719 18:40:09.873083   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 89/120
	I0719 18:40:10.875039   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 90/120
	I0719 18:40:11.877371   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 91/120
	I0719 18:40:12.878813   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 92/120
	I0719 18:40:13.880212   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 93/120
	I0719 18:40:14.881703   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 94/120
	I0719 18:40:15.884031   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 95/120
	I0719 18:40:16.885375   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 96/120
	I0719 18:40:17.886739   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 97/120
	I0719 18:40:18.888174   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 98/120
	I0719 18:40:19.889637   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 99/120
	I0719 18:40:20.891803   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 100/120
	I0719 18:40:21.893803   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 101/120
	I0719 18:40:22.895267   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 102/120
	I0719 18:40:23.896517   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 103/120
	I0719 18:40:24.897934   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 104/120
	I0719 18:40:25.899626   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 105/120
	I0719 18:40:26.900918   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 106/120
	I0719 18:40:27.902297   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 107/120
	I0719 18:40:28.904040   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 108/120
	I0719 18:40:29.906301   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 109/120
	I0719 18:40:30.907735   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 110/120
	I0719 18:40:31.909131   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 111/120
	I0719 18:40:32.910455   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 112/120
	I0719 18:40:33.912662   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 113/120
	I0719 18:40:34.914227   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 114/120
	I0719 18:40:35.916020   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 115/120
	I0719 18:40:36.917312   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 116/120
	I0719 18:40:37.918458   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 117/120
	I0719 18:40:38.919878   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 118/120
	I0719 18:40:39.921189   38971 main.go:141] libmachine: (ha-269108-m03) Waiting for machine to stop 119/120
	I0719 18:40:40.922711   38971 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 18:40:40.922769   38971 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 18:40:40.924562   38971 out.go:177] 
	W0719 18:40:40.925745   38971 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 18:40:40.925756   38971 out.go:239] * 
	* 
	W0719 18:40:40.927871   38971 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 18:40:40.929717   38971 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-269108 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-269108 --wait=true -v=7 --alsologtostderr
E0719 18:42:32.426665   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:43:55.472646   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-269108 --wait=true -v=7 --alsologtostderr: (3m52.437529984s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-269108
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-269108 -n ha-269108
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-269108 logs -n 25: (1.993504999s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m04 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp testdata/cp-test.txt                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m04_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03:/home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m03 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-269108 node stop m02 -v=7                                                    | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-269108 node start m02 -v=7                                                   | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-269108 -v=7                                                          | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-269108 -v=7                                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-269108 --wait=true -v=7                                                   | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:40 UTC | 19 Jul 24 18:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-269108                                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:44 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:40:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:40:40.971899   39480 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:40:40.971996   39480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:40:40.972000   39480 out.go:304] Setting ErrFile to fd 2...
	I0719 18:40:40.972004   39480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:40:40.972196   39480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:40:40.972704   39480 out.go:298] Setting JSON to false
	I0719 18:40:40.973616   39480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4984,"bootTime":1721409457,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:40:40.973667   39480 start.go:139] virtualization: kvm guest
	I0719 18:40:40.975799   39480 out.go:177] * [ha-269108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:40:40.977180   39480 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:40:40.977180   39480 notify.go:220] Checking for updates...
	I0719 18:40:40.979721   39480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:40:40.981038   39480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:40:40.982360   39480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:40:40.983556   39480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:40:40.984660   39480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:40:40.986125   39480 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:40:40.986219   39480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:40:40.986638   39480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:40:40.986678   39480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:40:41.001344   39480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0719 18:40:41.001754   39480 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:40:41.002395   39480 main.go:141] libmachine: Using API Version  1
	I0719 18:40:41.002416   39480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:40:41.002728   39480 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:40:41.002927   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.038177   39480 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 18:40:41.039406   39480 start.go:297] selected driver: kvm2
	I0719 18:40:41.039430   39480 start.go:901] validating driver "kvm2" against &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:40:41.039607   39480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:40:41.040057   39480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:40:41.040147   39480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:40:41.055156   39480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:40:41.056094   39480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:40:41.056171   39480 cni.go:84] Creating CNI manager for ""
	I0719 18:40:41.056187   39480 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 18:40:41.056259   39480 start.go:340] cluster config:
	{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:40:41.056429   39480 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:40:41.058350   39480 out.go:177] * Starting "ha-269108" primary control-plane node in "ha-269108" cluster
	I0719 18:40:41.059535   39480 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:40:41.059562   39480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:40:41.059575   39480 cache.go:56] Caching tarball of preloaded images
	I0719 18:40:41.059653   39480 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:40:41.059663   39480 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:40:41.059770   39480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:40:41.059976   39480 start.go:360] acquireMachinesLock for ha-269108: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:40:41.060017   39480 start.go:364] duration metric: took 24.403µs to acquireMachinesLock for "ha-269108"
	I0719 18:40:41.060030   39480 start.go:96] Skipping create...Using existing machine configuration
	I0719 18:40:41.060035   39480 fix.go:54] fixHost starting: 
	I0719 18:40:41.060278   39480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:40:41.060313   39480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:40:41.073685   39480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0719 18:40:41.074114   39480 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:40:41.074548   39480 main.go:141] libmachine: Using API Version  1
	I0719 18:40:41.074589   39480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:40:41.074904   39480 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:40:41.075076   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.075219   39480 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:40:41.076660   39480 fix.go:112] recreateIfNeeded on ha-269108: state=Running err=<nil>
	W0719 18:40:41.076676   39480 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 18:40:41.078553   39480 out.go:177] * Updating the running kvm2 "ha-269108" VM ...
	I0719 18:40:41.079604   39480 machine.go:94] provisionDockerMachine start ...
	I0719 18:40:41.079620   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.079798   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.082055   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.082488   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.082512   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.082645   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.082793   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.082954   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.083050   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.083191   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.083381   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.083392   39480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 18:40:41.184645   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:40:41.184687   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.184939   39480 buildroot.go:166] provisioning hostname "ha-269108"
	I0719 18:40:41.184962   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.185135   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.187916   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.188453   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.188483   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.188663   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.188869   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.189146   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.189290   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.189496   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.189721   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.189737   39480 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108 && echo "ha-269108" | sudo tee /etc/hostname
	I0719 18:40:41.307980   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:40:41.308006   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.310504   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.310879   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.310907   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.311080   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.311266   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.311400   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.311505   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.311815   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.311997   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.312014   39480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:40:41.412347   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:40:41.412379   39480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:40:41.412412   39480 buildroot.go:174] setting up certificates
	I0719 18:40:41.412425   39480 provision.go:84] configureAuth start
	I0719 18:40:41.412438   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.412695   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:40:41.415205   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.415553   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.415573   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.415947   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.418305   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.418614   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.418646   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.418784   39480 provision.go:143] copyHostCerts
	I0719 18:40:41.418819   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:40:41.418861   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:40:41.418870   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:40:41.418932   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:40:41.419025   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:40:41.419043   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:40:41.419050   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:40:41.419074   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:40:41.419130   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:40:41.419158   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:40:41.419166   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:40:41.419196   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:40:41.419264   39480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108 san=[127.0.0.1 192.168.39.163 ha-269108 localhost minikube]
	I0719 18:40:41.484308   39480 provision.go:177] copyRemoteCerts
	I0719 18:40:41.484374   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:40:41.484397   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.486893   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.487165   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.487188   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.487382   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.487588   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.487739   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.487878   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:40:41.566797   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:40:41.566856   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 18:40:41.594675   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:40:41.594757   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:40:41.618700   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:40:41.618780   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
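	A quick sanity check one could run on the node after the scp steps above, assuming openssl is present in the guest image (paths taken from the scp lines):
	# Confirm the provisioned server certificate chains to the copied CA
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem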
	I0719 18:40:41.642104   39480 provision.go:87] duration metric: took 229.651846ms to configureAuth
	I0719 18:40:41.642132   39480 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:40:41.642349   39480 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:40:41.642429   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.645394   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.645746   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.645778   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.645978   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.646175   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.646357   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.646532   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.646676   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.646848   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.646862   39480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:42:12.376387   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:42:12.376420   39480 machine.go:97] duration metric: took 1m31.296804013s to provisionDockerMachine
	I0719 18:42:12.376434   39480 start.go:293] postStartSetup for "ha-269108" (driver="kvm2")
	I0719 18:42:12.376447   39480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:42:12.376469   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.376843   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:42:12.376870   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.379922   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.380324   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.380351   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.380498   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.380695   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.380908   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.381163   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.463686   39480 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:42:12.467642   39480 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:42:12.467667   39480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:42:12.467745   39480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:42:12.467884   39480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:42:12.467898   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:42:12.468010   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:42:12.477453   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:42:12.500252   39480 start.go:296] duration metric: took 123.795057ms for postStartSetup
	I0719 18:42:12.500297   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.500606   39480 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 18:42:12.500641   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.503042   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.503403   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.503429   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.503577   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.503746   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.504009   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.504165   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	W0719 18:42:12.582005   39480 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 18:42:12.582033   39480 fix.go:56] duration metric: took 1m31.521997605s for fixHost
	I0719 18:42:12.582052   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.584714   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.585097   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.585136   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.585323   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.585512   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.585666   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.585806   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.585974   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:42:12.586129   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:42:12.586138   39480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:42:12.695290   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721414532.655233901
	
	I0719 18:42:12.695314   39480 fix.go:216] guest clock: 1721414532.655233901
	I0719 18:42:12.695326   39480 fix.go:229] Guest: 2024-07-19 18:42:12.655233901 +0000 UTC Remote: 2024-07-19 18:42:12.582040262 +0000 UTC m=+91.643952274 (delta=73.193639ms)
	I0719 18:42:12.695350   39480 fix.go:200] guest clock delta is within tolerance: 73.193639ms
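	The guest-clock check above amounts to reading the clock on both ends and comparing; roughly, using the SSH key, user and IP recorded earlier in this log:
	# Host timestamp followed by guest timestamp; the log reports a delta of ~73ms here
	date +%s.%N
	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa docker@192.168.39.163 'date +%s.%N'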
	I0719 18:42:12.695356   39480 start.go:83] releasing machines lock for "ha-269108", held for 1m31.635330658s
	I0719 18:42:12.695382   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.695645   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:42:12.698257   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.698699   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.698728   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.698903   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699441   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699625   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699726   39480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:42:12.699772   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.699830   39480 ssh_runner.go:195] Run: cat /version.json
	I0719 18:42:12.699862   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.702370   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702652   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702809   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.702832   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702983   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.702999   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.703020   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.703148   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.703195   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.703297   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.703360   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.703442   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.703849   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.703981   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.801355   39480 ssh_runner.go:195] Run: systemctl --version
	I0719 18:42:12.850880   39480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:42:13.001635   39480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:42:13.007797   39480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:42:13.007880   39480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:42:13.016557   39480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 18:42:13.016580   39480 start.go:495] detecting cgroup driver to use...
	I0719 18:42:13.016633   39480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:42:13.032290   39480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:42:13.045718   39480 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:42:13.045777   39480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:42:13.058831   39480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:42:13.071706   39480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:42:13.234299   39480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:42:13.383534   39480 docker.go:233] disabling docker service ...
	I0719 18:42:13.383606   39480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:42:13.406654   39480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:42:13.426184   39480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:42:13.566047   39480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:42:13.727243   39480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:42:13.742905   39480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:42:13.760883   39480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:42:13.760959   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.770847   39480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:42:13.770938   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.780693   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.790613   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.800329   39480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:42:13.810177   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.820509   39480 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.830482   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.840826   39480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:42:13.850061   39480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:42:13.860051   39480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:42:14.007079   39480 ssh_runner.go:195] Run: sudo systemctl restart crio
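	The cri-o adjustments above reduce to a few drop-in edits followed by a restart; a minimal sketch assembled from the sed invocations logged above (config path as shown, /etc/crio/crio.conf.d/02-crio.conf):
	# Pin the pause image and switch cri-o to the cgroupfs driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Apply the new runtime configuration
	sudo systemctl daemon-reload
	sudo systemctl restart crio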
	I0719 18:42:14.286724   39480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:42:14.286789   39480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:42:14.291387   39480 start.go:563] Will wait 60s for crictl version
	I0719 18:42:14.291434   39480 ssh_runner.go:195] Run: which crictl
	I0719 18:42:14.295091   39480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:42:14.335471   39480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:42:14.335546   39480 ssh_runner.go:195] Run: crio --version
	I0719 18:42:14.364798   39480 ssh_runner.go:195] Run: crio --version
	I0719 18:42:14.396039   39480 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:42:14.397303   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:42:14.399682   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:14.400120   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:14.400143   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:14.400320   39480 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:42:14.404823   39480 kubeadm.go:883] updating cluster {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:42:14.404988   39480 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:42:14.405033   39480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:42:14.447576   39480 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:42:14.447595   39480 crio.go:433] Images already preloaded, skipping extraction
	I0719 18:42:14.447636   39480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:42:14.481040   39480 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:42:14.481061   39480 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:42:14.481070   39480 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 18:42:14.481170   39480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
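	The kubelet flags above are delivered through systemd unit files; a rough sketch of the equivalent manual steps, assuming the drop-in path used further down in this log (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf), with the base unit carrying Wants=crio.service written separately to /lib/systemd/system/kubelet.service:
	# Write the [Service] override carrying the node-specific flags
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	EOF
	# Reload units and start the kubelet, as the log does below
	sudo systemctl daemon-reload
	sudo systemctl start kubelet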
	I0719 18:42:14.481242   39480 ssh_runner.go:195] Run: crio config
	I0719 18:42:14.525151   39480 cni.go:84] Creating CNI manager for ""
	I0719 18:42:14.525176   39480 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 18:42:14.525192   39480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:42:14.525226   39480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-269108 NodeName:ha-269108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:42:14.525377   39480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-269108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
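	The config rendered above is staged on the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new; a small sketch for inspecting the staged copy (the diff target is assumed to be the previously applied config, if one exists):
	# Compare the freshly rendered config with whatever is already on the node
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true
	# On a fresh control plane this file would be handed to kubeadm via --config, e.g.
	#   /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml ...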
	
	I0719 18:42:14.525403   39480 kube-vip.go:115] generating kube-vip config ...
	I0719 18:42:14.525537   39480 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:42:14.536439   39480 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:42:14.536574   39480 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
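	The manifest above is installed as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml appears a few lines below), so the kubelet on each control-plane node runs it directly; a quick check once the node is back up, with the mirror-pod name assumed from the usual <manifest-name>-<node-name> convention:
	# kube-vip shows up in kube-system as a mirror pod named after the node
	kubectl -n kube-system get pod kube-vip-ha-269108 -o wide
	# The VIP advertised in the manifest should answer on the API server port
	curl -k https://192.168.39.254:8443/version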
	I0719 18:42:14.536630   39480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:42:14.545704   39480 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:42:14.545781   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 18:42:14.554294   39480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 18:42:14.569627   39480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:42:14.585194   39480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 18:42:14.600863   39480 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:42:14.616531   39480 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:42:14.621343   39480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:42:14.766040   39480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:42:14.781020   39480 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.163
	I0719 18:42:14.781045   39480 certs.go:194] generating shared ca certs ...
	I0719 18:42:14.781063   39480 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.781241   39480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:42:14.781285   39480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:42:14.781296   39480 certs.go:256] generating profile certs ...
	I0719 18:42:14.781380   39480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:42:14.781406   39480 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c
	I0719 18:42:14.781423   39480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.111 192.168.39.254]
	I0719 18:42:14.993023   39480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c ...
	I0719 18:42:14.993058   39480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c: {Name:mk647737b2511a6613bcf2466c374c86204ef3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.993248   39480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c ...
	I0719 18:42:14.993265   39480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c: {Name:mkc35ee48db630f062b5250f57cf2b3ada419b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.993361   39480 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:42:14.993534   39480 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:42:14.993696   39480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
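For context, "generating signed profile cert ... with IP's: [...]" boils down to issuing an x509 serving certificate whose IP SANs cover the service IP, localhost, the three control-plane node IPs and the kube-vip VIP listed above. The sketch below is self-contained rather than faithful: it creates a throwaway CA and ECDSA keys, whereas minikube reuses its existing minikubeCA and its own key material.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA so the example is self-contained.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving cert for the apiserver: the IP SANs mirror the list logged above
	// (service IP, localhost, the node IPs and the kube-vip VIP).
	leafKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	var sans []net.IP
	for _, ip := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.163", "192.168.39.87", "192.168.39.111", "192.168.39.254"} {
		sans = append(sans, net.ParseIP(ip))
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}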
	I0719 18:42:14.993714   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:42:14.993733   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:42:14.993747   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:42:14.993766   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:42:14.993781   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:42:14.993797   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:42:14.993821   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:42:14.993876   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:42:14.993942   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:42:14.993984   39480 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:42:14.993997   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:42:14.994029   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:42:14.994072   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:42:14.994106   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:42:14.994158   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:42:14.994193   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:42:14.994220   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:14.994242   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:42:14.994833   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:42:15.019216   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:42:15.042097   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:42:15.064872   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:42:15.087820   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 18:42:15.109471   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:42:15.131594   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:42:15.153584   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:42:15.176046   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:42:15.197403   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:42:15.219801   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:42:15.242935   39480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:42:15.258054   39480 ssh_runner.go:195] Run: openssl version
	I0719 18:42:15.263628   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:42:15.274482   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.279364   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.279416   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.284668   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:42:15.293344   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:42:15.303047   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.307380   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.307430   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.312762   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:42:15.321181   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:42:15.331537   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.335986   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.336039   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.341384   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 18:42:15.349879   39480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:42:15.354058   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 18:42:15.359409   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 18:42:15.365505   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 18:42:15.371105   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 18:42:15.376402   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 18:42:15.381581   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
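Each `openssl x509 -checkend 86400` run above simply asks whether the certificate is still valid 24 hours from now. A minimal Go equivalent, using one of the files checked in the log (any of them would do):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same semantics as -checkend: non-zero exit if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}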
	I0719 18:42:15.386638   39480 kubeadm.go:392] StartCluster: {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:42:15.386787   39480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:42:15.386845   39480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:42:15.420628   39480 cri.go:89] found id: "1b4b0cf4bc834e140fe853005153f907336f59ad55a6520464db8afd71250afa"
	I0719 18:42:15.420648   39480 cri.go:89] found id: "61e06264256fd9c46c5db2f17e1bfe6d9458fdb6b286b8b5a73b64244b61c6ea"
	I0719 18:42:15.420652   39480 cri.go:89] found id: "dbb001dbe7be6d9a302c79a50318de53b13d624b2495932e885d40b5c3a53e36"
	I0719 18:42:15.420656   39480 cri.go:89] found id: "4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433"
	I0719 18:42:15.420659   39480 cri.go:89] found id: "160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a"
	I0719 18:42:15.420662   39480 cri.go:89] found id: "1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671"
	I0719 18:42:15.420665   39480 cri.go:89] found id: "61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452"
	I0719 18:42:15.420668   39480 cri.go:89] found id: "00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74"
	I0719 18:42:15.420670   39480 cri.go:89] found id: "c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f"
	I0719 18:42:15.420676   39480 cri.go:89] found id: "65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d"
	I0719 18:42:15.420695   39480 cri.go:89] found id: ""
	I0719 18:42:15.420735   39480 ssh_runner.go:195] Run: sudo runc list -f json
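The "found id:" entries above come from the crictl invocation shown earlier in this block, filtered to the kube-system namespace label. A small Go sketch of the same idea, shelling out to crictl directly (without the `sudo -s eval` indirection used over SSH) and splitting the output into container IDs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows being run on the node; assumes sudo and
	// crictl are on PATH.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}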
	
	
	==> CRI-O <==
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.298707411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414674298686542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1e5e91b-75c9-452e-92e1-95f26672a2fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.299374286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5f2b6fa-f921-4be5-81d6-87cba90ce62a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.299431893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5f2b6fa-f921-4be5-81d6-87cba90ce62a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.300078036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721414616272082189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721414579261633444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721414575261965951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,PodSandboxId:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414569530811069,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721414568264983939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,PodSandboxId:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721414551291706910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,PodSandboxId:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536407869616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,PodSandboxId:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721414536291456893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,PodSandboxId:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536384687335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,PodSandboxId:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721414536205718187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721414536144949582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,PodSandboxId:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721414536180254081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,PodSandboxId:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721414536030037354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9
513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721414535988012130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721414041026695986,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903448938005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903464969133,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721413891286212300,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721413887206521167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721413866852793774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721413866792915678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5f2b6fa-f921-4be5-81d6-87cba90ce62a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.356999073Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e942426f-5049-44ae-b7d0-4e62b718d13e name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.357115515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e942426f-5049-44ae-b7d0-4e62b718d13e name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.358500340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bafb65ff-52a5-493a-888f-9ca3b5e245f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.358979082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414674358955315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bafb65ff-52a5-493a-888f-9ca3b5e245f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.359670882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7aa4fec1-5607-44bb-9f00-0e66b177cbc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.359724875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7aa4fec1-5607-44bb-9f00-0e66b177cbc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.360127421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721414616272082189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721414579261633444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721414575261965951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,PodSandboxId:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414569530811069,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721414568264983939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,PodSandboxId:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721414551291706910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,PodSandboxId:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536407869616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,PodSandboxId:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721414536291456893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,PodSandboxId:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536384687335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,PodSandboxId:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721414536205718187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721414536144949582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,PodSandboxId:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721414536180254081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,PodSandboxId:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721414536030037354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9
513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721414535988012130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721414041026695986,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903448938005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903464969133,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721413891286212300,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721413887206521167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721413866852793774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721413866792915678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7aa4fec1-5607-44bb-9f00-0e66b177cbc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.413417268Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8ba5854-6e6a-4ef8-a153-db070479d642 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.413495240Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8ba5854-6e6a-4ef8-a153-db070479d642 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.414771324Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=637924fe-c440-4673-927d-8b463a8508c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.415381232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414674415352728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=637924fe-c440-4673-927d-8b463a8508c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.416266720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4db754b1-650c-4057-855a-a0886a415f99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.416348872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4db754b1-650c-4057-855a-a0886a415f99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.416872304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721414616272082189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721414579261633444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721414575261965951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,PodSandboxId:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414569530811069,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721414568264983939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,PodSandboxId:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721414551291706910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,PodSandboxId:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536407869616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,PodSandboxId:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721414536291456893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,PodSandboxId:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536384687335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,PodSandboxId:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721414536205718187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721414536144949582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,PodSandboxId:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721414536180254081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,PodSandboxId:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721414536030037354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9
513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721414535988012130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721414041026695986,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903448938005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903464969133,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721413891286212300,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721413887206521167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721413866852793774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721413866792915678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4db754b1-650c-4057-855a-a0886a415f99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.460381241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eeb98912-bbe3-4fd8-98ce-e4d6973949b3 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.460654945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eeb98912-bbe3-4fd8-98ce-e4d6973949b3 name=/runtime.v1.RuntimeService/Version
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.462521800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80e84527-b2cc-4979-815d-4692d5737c57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.463127775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721414674463062748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80e84527-b2cc-4979-815d-4692d5737c57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.463681581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5064481e-fec5-4375-803b-ae3ddb95b169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.463752271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5064481e-fec5-4375-803b-ae3ddb95b169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:44:34 ha-269108 crio[3808]: time="2024-07-19 18:44:34.464569617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721414616272082189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721414579261633444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721414575261965951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,PodSandboxId:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414569530811069,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721414568264983939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,PodSandboxId:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721414551291706910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,PodSandboxId:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536407869616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,PodSandboxId:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721414536291456893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,PodSandboxId:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536384687335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,PodSandboxId:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721414536205718187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721414536144949582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,PodSandboxId:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721414536180254081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,PodSandboxId:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721414536030037354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9
513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721414535988012130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721414041026695986,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903448938005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903464969133,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721413891286212300,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721413887206521167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721413866852793774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721413866792915678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5064481e-fec5-4375-803b-ae3ddb95b169 name=/runtime.v1.RuntimeService/ListContainers
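	
	==> ListContainers client sketch <==
	The crio entries above are gRPC calls against the CRI RuntimeService (Version, ImageFsInfo, ListContainers with an empty filter). For reference, a minimal Go sketch of issuing the same ListContainers query is given here; it is not part of the captured output, and it assumes the k8s.io/cri-api v1 client and the default CRI-O socket path /var/run/crio/crio.sock, which may differ on other hosts.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Socket path is an assumption (default CRI-O location); adjust per host.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// Same request the log records: ListContainers with an empty filter,
		// so the runtime returns the full container list.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  state=%s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}
	
	From the node itself, "crictl ps -a" issues the same RuntimeService call and renders output similar to the container status table below.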
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e29d9d2d41500       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      58 seconds ago       Running             storage-provisioner       4                   7eeab2b3bad0c       storage-provisioner
	34be3054e3f20       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   62bcad4efd94b       kube-apiserver-ha-269108
	a0f5d4e1754eb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   cdf045f94dd3e       kube-controller-manager-ha-269108
	820fa6d57d467       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d2257256b0b55       busybox-fc5497c4f-8qqtr
	a8f438a5b11de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   7eeab2b3bad0c       storage-provisioner
	cf6013946ad35       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   135591c805808       kube-vip-ha-269108
	f8510f25dcbe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   b8bac15efc8d4       coredns-7db6d8ff4d-zxh8h
	6e513814a1603       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0788c2a97f367       coredns-7db6d8ff4d-frb69
	00a3dee6a9392       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   33491bdbed439       kindnet-4lhkr
	8c99e057b54cd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   08674abf83e59       etcd-ha-269108
	02e7b50680c1d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   dcc6afb97407c       kube-scheduler-ha-269108
	81fb866ed689e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   cdf045f94dd3e       kube-controller-manager-ha-269108
	01c9be2fa966f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   05561f6f664c9       kube-proxy-pkjns
	76c0f31007003       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   62bcad4efd94b       kube-apiserver-ha-269108
	f7246971f0534       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   8ce5a6a0852e2       busybox-fc5497c4f-8qqtr
	4e3cdc51f7ed0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   3932d100e152d       coredns-7db6d8ff4d-zxh8h
	160fdbf709fdf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   7a1d288c9d262       coredns-7db6d8ff4d-frb69
	1b0808457c5c7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   eb0e9822a0f4e       kindnet-4lhkr
	61f7a66979140       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   1f4d197a59cd5       kube-proxy-pkjns
	00761e9db17ec       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   9a07a59b0e0e4       kube-scheduler-ha-269108
	65cdebd7fead6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   4473d4683f2e1       etcd-ha-269108
	
	
	==> coredns [160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a] <==
	[INFO] 10.244.2.2:41729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195097s
	[INFO] 10.244.2.2:54527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110807s
	[INFO] 10.244.0.4:56953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073764s
	[INFO] 10.244.0.4:52941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231699s
	[INFO] 10.244.1.2:55601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109961s
	[INFO] 10.244.1.2:35949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100008s
	[INFO] 10.244.1.2:37921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013057s
	[INFO] 10.244.2.2:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135522s
	[INFO] 10.244.2.2:49412 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111732s
	[INFO] 10.244.0.4:60902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080561s
	[INFO] 10.244.0.4:46371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144185s
	[INFO] 10.244.1.2:42114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113886s
	[INFO] 10.244.1.2:42251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112767s
	[INFO] 10.244.1.2:32914 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121034s
	[INFO] 10.244.2.2:44583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109669s
	[INFO] 10.244.2.2:43221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085803s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1865&timeout=8m43s&timeoutSeconds=523&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1865&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433] <==
	[INFO] 10.244.1.2:37958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120338s
	[INFO] 10.244.1.2:41663 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057406s
	[INFO] 10.244.2.2:39895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001764061s
	[INFO] 10.244.2.2:32797 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140596s
	[INFO] 10.244.2.2:41374 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014218s
	[INFO] 10.244.2.2:38111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248331s
	[INFO] 10.244.2.2:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167068s
	[INFO] 10.244.2.2:35228 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156257s
	[INFO] 10.244.0.4:60265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081089s
	[INFO] 10.244.0.4:46991 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004503s
	[INFO] 10.244.1.2:54120 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097136s
	[INFO] 10.244.2.2:36674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166666s
	[INFO] 10.244.2.2:45715 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115611s
	[INFO] 10.244.0.4:49067 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123058s
	[INFO] 10.244.0.4:53779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117437s
	[INFO] 10.244.1.2:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133641s
	[INFO] 10.244.2.2:48409 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071901s
	[INFO] 10.244.2.2:34803 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085512s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1865&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1033758847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:42:28.055) (total time: 10698ms):
	Trace[1033758847]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer 10697ms (18:42:38.753)
	Trace[1033758847]: [10.698408486s] [10.698408486s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e] <==
	Trace[2078807794]: [13.113177825s] [13.113177825s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42372->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[653282894]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:42:28.523) (total time: 12904ms):
	Trace[653282894]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer 12904ms (18:42:41.428)
	Trace[653282894]: [12.904551114s] [12.904551114s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-269108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:43:02 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:43:02 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:43:02 +0000   Fri, 19 Jul 2024 18:31:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:43:02 +0000   Fri, 19 Jul 2024 18:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-269108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e79eb460340347cf89b27475ae92f17a
	  System UUID:                e79eb460-3403-47cf-89b2-7475ae92f17a
	  Boot ID:                    54c61684-08fe-4593-a142-b4110f9ebe97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8qqtr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-frb69             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-zxh8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-269108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-4lhkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-269108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-269108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pkjns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-269108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-269108                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    100m (5%)
	  memory             290Mi (13%)   390Mi (18%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 92s    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-269108 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-269108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-269108 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   NodeReady                12m    kubelet          Node ha-269108 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Warning  ContainerGCFailed        3m21s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           84s    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           80s    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           29s    node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	
	
	Name:               ha-269108-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:32:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:44:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-269108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd8856bcc7e74f5db8563df427c95529
	  System UUID:                bd8856bc-c7e7-4f5d-b856-3df427c95529
	  Boot ID:                    6d32c45e-6958-4023-8d00-acf926738400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hds6j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-269108-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-fs5wk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-269108-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-269108-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vr7bz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-269108-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-269108-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  NodeNotReady             8m47s              node-controller  Node ha-269108-m02 status is now: NodeNotReady
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           80s                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	
	
	Name:               ha-269108-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_33_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:33:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:44:11 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:44:11 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:44:11 +0000   Fri, 19 Jul 2024 18:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:44:11 +0000   Fri, 19 Jul 2024 18:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-269108-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6e00e6f38a04bfdb40d7ea3e2c16e0c
	  System UUID:                c6e00e6f-38a0-4bfd-b40d-7ea3e2c16e0c
	  Boot ID:                    4b4f3f48-11b4-4295-82e4-e6a52674f235
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lrphf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-269108-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-bmcvm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-269108-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-269108-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9zpp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-269108-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-269108-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-269108-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-269108-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal   RegisteredNode           80s                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  53s                kubelet          Node ha-269108-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s                kubelet          Node ha-269108-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s                kubelet          Node ha-269108-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 53s                kubelet          Node ha-269108-m03 has been rebooted, boot id: 4b4f3f48-11b4-4295-82e4-e6a52674f235
	  Normal   RegisteredNode           29s                node-controller  Node ha-269108-m03 event: Registered Node ha-269108-m03 in Controller
	
	
	Name:               ha-269108-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_34_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:34:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:44:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:44:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:44:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:44:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-269108-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db189232948d49d08567fb37977226c0
	  System UUID:                db189232-948d-49d0-8567-fb37977226c0
	  Boot ID:                    1a39f02c-c82c-49de-b33a-f6daf6a96016
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2cplc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qfjxp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 9m54s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           9m59s              node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   NodeReady                9m41s              kubelet          Node ha-269108-m04 status is now: NodeReady
	  Normal   RegisteredNode           85s                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   NodeNotReady             45s                node-controller  Node ha-269108-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-269108-m04 has been rebooted, boot id: 1a39f02c-c82c-49de-b33a-f6daf6a96016
	  Normal   NodeReady                9s                 kubelet          Node ha-269108-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.238779] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059116] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053977] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173547] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128447] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.273873] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[Jul19 18:31] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +3.987459] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062161] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240611] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.081692] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.045525] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.095803] kauditd_printk_skb: 36 callbacks suppressed
	[Jul19 18:32] kauditd_printk_skb: 24 callbacks suppressed
	[Jul19 18:42] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.156997] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.187872] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.147767] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[  +0.292465] systemd-fstab-generator[3794]: Ignoring "noauto" option for root device
	[  +0.753848] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +6.178344] kauditd_printk_skb: 222 callbacks suppressed
	[ +33.928680] kauditd_printk_skb: 5 callbacks suppressed
	[Jul19 18:43] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d] <==
	{"level":"info","ts":"2024-07-19T18:40:41.858632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgPreVoteResp from a8a86752a40bcef4 at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 [logterm: 2, index: 2203] sent MsgPreVote request to 17e1ba8abf8c6143 at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 [logterm: 2, index: 2203] sent MsgPreVote request to 74443fbe11e84754 at term 2"}
	{"level":"warn","ts":"2024-07-19T18:40:42.069524Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T18:40:42.069634Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T18:40:42.071243Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a8a86752a40bcef4","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T18:40:42.071444Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071472Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071501Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071596Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071643Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071689Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071713Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.07172Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071729Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071757Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.07182Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071855Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071893Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071905Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.074475Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-07-19T18:40:42.074649Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-07-19T18:40:42.074683Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-269108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	
	
	==> etcd [8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d] <==
	{"level":"warn","ts":"2024-07-19T18:43:35.241579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:43:35.341215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:43:35.440939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:43:35.541286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:43:35.641292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a8a86752a40bcef4","from":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T18:43:36.833431Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:36.833495Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:37.018815Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:37.018855Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:40.834842Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:40.834959Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:42.019553Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:42.019692Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:44.836994Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.111:2380/version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:44.837125Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"74443fbe11e84754","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T18:43:46.100361Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:43:46.10042Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:43:46.10302Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:43:46.11711Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a8a86752a40bcef4","to":"74443fbe11e84754","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T18:43:46.117275Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:43:46.124962Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a8a86752a40bcef4","to":"74443fbe11e84754","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T18:43:46.124998Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:43:47.020401Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:47.020416Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T18:43:50.959071Z","caller":"traceutil/trace.go:171","msg":"trace[1885181845] transaction","detail":"{read_only:false; response_revision:2285; number_of_response:1; }","duration":"146.644003ms","start":"2024-07-19T18:43:50.812403Z","end":"2024-07-19T18:43:50.959047Z","steps":["trace[1885181845] 'process raft request'  (duration: 146.53966ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:44:35 up 14 min,  0 users,  load average: 0.25, 0.43, 0.27
	Linux ha-269108 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921] <==
	I0719 18:43:57.396266       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:44:07.394880       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:44:07.395023       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:44:07.395240       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:44:07.395317       1 main.go:299] handling current node
	I0719 18:44:07.395376       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:44:07.395396       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:44:07.395510       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:44:07.395517       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:44:17.394961       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:44:17.395055       1 main.go:299] handling current node
	I0719 18:44:17.395086       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:44:17.395092       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:44:17.395407       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:44:17.395441       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:44:17.395538       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:44:17.395566       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:44:27.395318       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:44:27.395460       1 main.go:299] handling current node
	I0719 18:44:27.395516       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:44:27.395551       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:44:27.395879       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:44:27.395943       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:44:27.396066       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:44:27.396106       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671] <==
	I0719 18:40:12.391414       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:12.391462       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:12.391607       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:12.391627       1 main.go:299] handling current node
	I0719 18:40:12.391638       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:12.391643       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:12.391708       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:12.391725       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:22.387212       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:22.387307       1 main.go:299] handling current node
	I0719 18:40:22.387342       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:22.387348       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:22.387570       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:22.387592       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:22.387641       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:22.387658       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:32.385780       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:32.385858       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:32.386103       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:32.386487       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:32.386640       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:32.386683       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:32.386779       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:32.386800       1 main.go:299] handling current node
	E0719 18:40:40.209465       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a] <==
	I0719 18:43:01.010623       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0719 18:43:01.010695       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0719 18:43:01.064743       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 18:43:01.064879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 18:43:01.065946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 18:43:01.067232       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 18:43:01.068036       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 18:43:01.068124       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0719 18:43:01.076376       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.87]
	I0719 18:43:01.085215       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 18:43:01.113898       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 18:43:01.113986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 18:43:01.114044       1 aggregator.go:165] initial CRD sync complete...
	I0719 18:43:01.114067       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 18:43:01.114074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 18:43:01.114079       1 cache.go:39] Caches are synced for autoregister controller
	I0719 18:43:01.114326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 18:43:01.114357       1 policy_source.go:224] refreshing policies
	I0719 18:43:01.151893       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 18:43:01.179297       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 18:43:01.187112       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0719 18:43:01.190545       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 18:43:01.973712       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 18:43:02.310998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.163 192.168.39.87]
	W0719 18:43:12.311121       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.87]
	
	
	==> kube-apiserver [76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b] <==
	I0719 18:42:16.603452       1 options.go:221] external host was not specified, using 192.168.39.163
	I0719 18:42:16.630551       1 server.go:148] Version: v1.30.3
	I0719 18:42:16.630626       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:42:17.726457       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 18:42:17.739236       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 18:42:17.743710       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 18:42:17.745545       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 18:42:17.745822       1 instance.go:299] Using reconciler: lease
	W0719 18:42:37.724257       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0719 18:42:37.724441       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0719 18:42:37.747080       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c] <==
	I0719 18:42:17.414805       1 serving.go:380] Generated self-signed cert in-memory
	I0719 18:42:18.017645       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 18:42:18.017729       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:42:18.021039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 18:42:18.021768       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 18:42:18.022049       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 18:42:18.022112       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0719 18:42:38.752550       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.163:8443/healthz\": dial tcp 192.168.39.163:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1] <==
	I0719 18:43:14.099311       1 shared_informer.go:320] Caches are synced for disruption
	I0719 18:43:14.102487       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0719 18:43:14.206225       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 18:43:14.221635       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 18:43:14.262608       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 18:43:14.276693       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 18:43:14.719234       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 18:43:14.736362       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 18:43:14.736468       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 18:43:19.705919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.901µs"
	I0719 18:43:20.345773       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dzl6d\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 18:43:20.346197       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"174fe22e-8e69-4270-be8d-360e58f87fc9", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dzl6d": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:43:20.367092       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.76161ms"
	I0719 18:43:20.367260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.46µs"
	I0719 18:43:23.279507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.654248ms"
	I0719 18:43:23.281342       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dzl6d\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 18:43:23.281801       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"174fe22e-8e69-4270-be8d-360e58f87fc9", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dzl6d": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:43:23.282049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.721µs"
	I0719 18:43:25.429069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.425046ms"
	I0719 18:43:25.429211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.71µs"
	I0719 18:43:42.286458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.954869ms"
	I0719 18:43:42.287018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="178.53µs"
	I0719 18:43:59.360900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.197701ms"
	I0719 18:43:59.360992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.936µs"
	I0719 18:44:26.767447       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-269108-m04"
	
	
	==> kube-proxy [01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7] <==
	E0719 18:42:43.475842       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-269108\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 18:43:01.908744       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-269108\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 18:43:01.909067       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0719 18:43:01.944625       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:43:01.944696       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:43:01.944713       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:43:01.947060       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:43:01.947490       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:43:01.947548       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:43:01.949139       1 config.go:192] "Starting service config controller"
	I0719 18:43:01.949257       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:43:01.949314       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:43:01.949331       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:43:01.950013       1 config.go:319] "Starting node config controller"
	I0719 18:43:01.950050       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0719 18:43:04.979704       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0719 18:43:04.979995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:43:04.979941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:43:04.980290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0719 18:43:06.249864       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:43:06.449812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:43:06.550347       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452] <==
	E0719 18:39:39.157536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.227926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.228233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.228427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.228525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.229334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.229567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.374966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.375355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.375739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.375856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.375764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.376052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:57.589448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:57.589535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:00.661176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:00.661234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:00.661313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:00.661345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:16.020039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:16.020140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:22.164015       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:22.164078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:25.236249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:25.236365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74] <==
	E0719 18:40:38.987640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 18:40:39.413817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 18:40:39.414018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 18:40:39.745014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 18:40:39.745059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 18:40:40.493569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 18:40:40.493610       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 18:40:40.668335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:40:40.668389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:40:40.731543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:40:40.731638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:40:40.788134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 18:40:40.788217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 18:40:40.990528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 18:40:40.990582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 18:40:41.023983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 18:40:41.024114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 18:40:41.317914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:40:41.317944       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 18:40:41.649122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:40:41.649238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0719 18:40:41.764072       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 18:40:41.764543       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0719 18:40:41.764972       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0719 18:40:41.770244       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627] <==
	W0719 18:42:55.734479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.163:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:55.734522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.163:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:55.808696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.163:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:55.808825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.163:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:55.837425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:55.837520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.122829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.163:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.122968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.163:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.252314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.163:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.252469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.163:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.292768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.163:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.292828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.163:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.369807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.163:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.369858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.163:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.437502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.163:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.437555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.163:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.632955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.633071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:57.506903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:57.507030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:59.315756       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.163:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:59.315872       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.163:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:59.377818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.163:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:59.377883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.163:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	I0719 18:43:13.559853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 18:43:01 ha-269108 kubelet[1371]: I0719 18:43:01.907666    1371 status_manager.go:853] "Failed to get status for pod" podUID="fcefaef2992cd2ee3a947d6974f9e123" pod="kube-system/kube-vip-ha-269108" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-269108\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 18:43:01 ha-269108 kubelet[1371]: W0719 18:43:01.907733    1371 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:01 ha-269108 kubelet[1371]: E0719 18:43:01.907911    1371 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=1802": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:01 ha-269108 kubelet[1371]: W0719 18:43:01.908037    1371 reflector.go:547] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-269108&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:01 ha-269108 kubelet[1371]: E0719 18:43:01.908119    1371 reflector.go:150] pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-269108&resourceVersion=1807": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:01 ha-269108 kubelet[1371]: W0719 18:43:01.908240    1371 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:01 ha-269108 kubelet[1371]: E0719 18:43:01.908312    1371 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 19 18:43:09 ha-269108 kubelet[1371]: I0719 18:43:09.251752    1371 scope.go:117] "RemoveContainer" containerID="a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4"
	Jul 19 18:43:09 ha-269108 kubelet[1371]: E0719 18:43:09.251980    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(38aaccd6-4fb9-4744-b502-261ebeaf476a)\"" pod="kube-system/storage-provisioner" podUID="38aaccd6-4fb9-4744-b502-261ebeaf476a"
	Jul 19 18:43:13 ha-269108 kubelet[1371]: E0719 18:43:13.274897    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:43:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:43:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:43:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:43:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:43:13 ha-269108 kubelet[1371]: I0719 18:43:13.385518    1371 scope.go:117] "RemoveContainer" containerID="61e06264256fd9c46c5db2f17e1bfe6d9458fdb6b286b8b5a73b64244b61c6ea"
	Jul 19 18:43:24 ha-269108 kubelet[1371]: I0719 18:43:24.251601    1371 scope.go:117] "RemoveContainer" containerID="a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4"
	Jul 19 18:43:24 ha-269108 kubelet[1371]: E0719 18:43:24.251813    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(38aaccd6-4fb9-4744-b502-261ebeaf476a)\"" pod="kube-system/storage-provisioner" podUID="38aaccd6-4fb9-4744-b502-261ebeaf476a"
	Jul 19 18:43:36 ha-269108 kubelet[1371]: I0719 18:43:36.252750    1371 scope.go:117] "RemoveContainer" containerID="a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4"
	Jul 19 18:43:36 ha-269108 kubelet[1371]: I0719 18:43:36.254101    1371 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-269108" podUID="a72f4143-401a-41a3-8adc-4cfc75369797"
	Jul 19 18:43:36 ha-269108 kubelet[1371]: I0719 18:43:36.286068    1371 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-269108"
	Jul 19 18:44:13 ha-269108 kubelet[1371]: E0719 18:44:13.275802    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:44:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:44:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:44:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:44:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 18:44:33.970322   40796 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19307-14841/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-269108 -n ha-269108
helpers_test.go:261: (dbg) Run:  kubectl --context ha-269108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (356.98s)
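Note on the "bufio.Scanner: token too long" error in the stderr above: that message is the Go standard-library scanner hitting its default 64 KiB per-line limit (bufio.MaxScanTokenSize) while reading lastStart.txt, so the post-mortem log collection gives up on that file. A minimal sketch of reading such a file with an enlarged scanner buffer; the path and buffer sizes below are illustrative, not taken from the test harness:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; very long log lines trigger
		// "token too long" unless the cap is raised via Buffer.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		lines := 0
		for sc.Scan() {
			lines++
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan failed: %v", err)
		}
		fmt.Println("lines read:", lines)
	}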

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 stop -v=7 --alsologtostderr: exit status 82 (2m0.45639989s)

                                                
                                                
-- stdout --
	* Stopping node "ha-269108-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:44:53.459890   41204 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:44:53.460207   41204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:44:53.460222   41204 out.go:304] Setting ErrFile to fd 2...
	I0719 18:44:53.460231   41204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:44:53.460659   41204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:44:53.461233   41204 out.go:298] Setting JSON to false
	I0719 18:44:53.461325   41204 mustload.go:65] Loading cluster: ha-269108
	I0719 18:44:53.461776   41204 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:44:53.461890   41204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:44:53.462089   41204 mustload.go:65] Loading cluster: ha-269108
	I0719 18:44:53.462245   41204 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:44:53.462271   41204 stop.go:39] StopHost: ha-269108-m04
	I0719 18:44:53.462758   41204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:44:53.462807   41204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:44:53.478056   41204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40353
	I0719 18:44:53.478586   41204 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:44:53.479220   41204 main.go:141] libmachine: Using API Version  1
	I0719 18:44:53.479242   41204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:44:53.479613   41204 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:44:53.482083   41204 out.go:177] * Stopping node "ha-269108-m04"  ...
	I0719 18:44:53.483327   41204 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 18:44:53.483367   41204 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:44:53.483616   41204 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 18:44:53.483638   41204 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:44:53.486385   41204 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:44:53.486829   41204 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:44:21 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:44:53.486856   41204 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:44:53.487008   41204 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:44:53.487161   41204 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:44:53.487324   41204 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:44:53.487487   41204 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	I0719 18:44:53.570235   41204 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 18:44:53.622210   41204 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 18:44:53.674234   41204 main.go:141] libmachine: Stopping "ha-269108-m04"...
	I0719 18:44:53.674274   41204 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:44:53.675769   41204 main.go:141] libmachine: (ha-269108-m04) Calling .Stop
	I0719 18:44:53.679292   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 0/120
	I0719 18:44:54.680798   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 1/120
	I0719 18:44:55.682300   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 2/120
	I0719 18:44:56.683878   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 3/120
	I0719 18:44:57.685554   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 4/120
	I0719 18:44:58.687509   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 5/120
	I0719 18:44:59.689041   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 6/120
	I0719 18:45:00.690332   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 7/120
	I0719 18:45:01.691661   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 8/120
	I0719 18:45:02.693138   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 9/120
	I0719 18:45:03.695172   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 10/120
	I0719 18:45:04.696639   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 11/120
	I0719 18:45:05.698467   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 12/120
	I0719 18:45:06.699931   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 13/120
	I0719 18:45:07.701201   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 14/120
	I0719 18:45:08.703057   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 15/120
	I0719 18:45:09.704716   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 16/120
	I0719 18:45:10.706146   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 17/120
	I0719 18:45:11.707567   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 18/120
	I0719 18:45:12.709554   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 19/120
	I0719 18:45:13.711788   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 20/120
	I0719 18:45:14.713904   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 21/120
	I0719 18:45:15.715219   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 22/120
	I0719 18:45:16.716449   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 23/120
	I0719 18:45:17.717714   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 24/120
	I0719 18:45:18.720079   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 25/120
	I0719 18:45:19.721451   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 26/120
	I0719 18:45:20.722821   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 27/120
	I0719 18:45:21.724472   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 28/120
	I0719 18:45:22.725829   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 29/120
	I0719 18:45:23.727856   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 30/120
	I0719 18:45:24.729308   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 31/120
	I0719 18:45:25.730708   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 32/120
	I0719 18:45:26.732156   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 33/120
	I0719 18:45:27.733501   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 34/120
	I0719 18:45:28.735319   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 35/120
	I0719 18:45:29.736927   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 36/120
	I0719 18:45:30.738352   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 37/120
	I0719 18:45:31.739726   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 38/120
	I0719 18:45:32.740944   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 39/120
	I0719 18:45:33.743110   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 40/120
	I0719 18:45:34.745062   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 41/120
	I0719 18:45:35.746472   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 42/120
	I0719 18:45:36.747795   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 43/120
	I0719 18:45:37.749160   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 44/120
	I0719 18:45:38.751082   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 45/120
	I0719 18:45:39.752521   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 46/120
	I0719 18:45:40.753757   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 47/120
	I0719 18:45:41.755081   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 48/120
	I0719 18:45:42.756593   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 49/120
	I0719 18:45:43.758712   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 50/120
	I0719 18:45:44.760057   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 51/120
	I0719 18:45:45.761569   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 52/120
	I0719 18:45:46.762893   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 53/120
	I0719 18:45:47.764273   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 54/120
	I0719 18:45:48.766082   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 55/120
	I0719 18:45:49.767992   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 56/120
	I0719 18:45:50.769354   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 57/120
	I0719 18:45:51.770710   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 58/120
	I0719 18:45:52.772363   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 59/120
	I0719 18:45:53.774189   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 60/120
	I0719 18:45:54.775356   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 61/120
	I0719 18:45:55.776625   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 62/120
	I0719 18:45:56.778148   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 63/120
	I0719 18:45:57.779483   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 64/120
	I0719 18:45:58.781252   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 65/120
	I0719 18:45:59.782641   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 66/120
	I0719 18:46:00.783812   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 67/120
	I0719 18:46:01.785101   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 68/120
	I0719 18:46:02.786368   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 69/120
	I0719 18:46:03.788362   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 70/120
	I0719 18:46:04.790273   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 71/120
	I0719 18:46:05.791953   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 72/120
	I0719 18:46:06.794280   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 73/120
	I0719 18:46:07.796272   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 74/120
	I0719 18:46:08.798364   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 75/120
	I0719 18:46:09.799550   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 76/120
	I0719 18:46:10.800962   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 77/120
	I0719 18:46:11.802766   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 78/120
	I0719 18:46:12.803975   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 79/120
	I0719 18:46:13.806050   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 80/120
	I0719 18:46:14.807339   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 81/120
	I0719 18:46:15.808532   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 82/120
	I0719 18:46:16.810600   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 83/120
	I0719 18:46:17.811922   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 84/120
	I0719 18:46:18.813354   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 85/120
	I0719 18:46:19.814635   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 86/120
	I0719 18:46:20.816067   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 87/120
	I0719 18:46:21.817564   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 88/120
	I0719 18:46:22.819724   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 89/120
	I0719 18:46:23.822138   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 90/120
	I0719 18:46:24.823788   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 91/120
	I0719 18:46:25.825500   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 92/120
	I0719 18:46:26.826956   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 93/120
	I0719 18:46:27.828848   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 94/120
	I0719 18:46:28.830266   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 95/120
	I0719 18:46:29.831931   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 96/120
	I0719 18:46:30.833336   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 97/120
	I0719 18:46:31.834745   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 98/120
	I0719 18:46:32.836191   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 99/120
	I0719 18:46:33.838534   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 100/120
	I0719 18:46:34.839812   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 101/120
	I0719 18:46:35.841145   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 102/120
	I0719 18:46:36.842561   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 103/120
	I0719 18:46:37.844097   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 104/120
	I0719 18:46:38.845634   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 105/120
	I0719 18:46:39.847023   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 106/120
	I0719 18:46:40.848754   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 107/120
	I0719 18:46:41.850087   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 108/120
	I0719 18:46:42.851505   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 109/120
	I0719 18:46:43.853591   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 110/120
	I0719 18:46:44.854876   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 111/120
	I0719 18:46:45.856483   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 112/120
	I0719 18:46:46.858581   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 113/120
	I0719 18:46:47.859880   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 114/120
	I0719 18:46:48.861572   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 115/120
	I0719 18:46:49.863289   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 116/120
	I0719 18:46:50.864658   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 117/120
	I0719 18:46:51.866292   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 118/120
	I0719 18:46:52.867896   41204 main.go:141] libmachine: (ha-269108-m04) Waiting for machine to stop 119/120
	I0719 18:46:53.868395   41204 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 18:46:53.868442   41204 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 18:46:53.870441   41204 out.go:177] 
	W0719 18:46:53.871791   41204 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 18:46:53.871807   41204 out.go:239] * 
	* 
	W0719 18:46:53.873921   41204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 18:46:53.875301   41204 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-269108 stop -v=7 --alsologtostderr": exit status 82
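For orientation, the 120 "Waiting for machine to stop N/120" lines above come from a bounded poll: the stop path queries the driver for the VM state roughly once per second and gives up after 120 attempts, which is what surfaces as GUEST_STOP_TIMEOUT and exit status 82. A rough sketch of such a poll loop, with a hypothetical vmIsRunning callback standing in for the real driver call (illustrative only, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// waitForStop polls a VM-state callback until it reports stopped,
	// sleeping one second between attempts, and fails after the limit.
	func waitForStop(vmIsRunning func() (bool, error), attempts int) error {
		for i := 0; i < attempts; i++ {
			running, err := vmIsRunning()
			if err != nil {
				return fmt.Errorf("query vm state: %w", err)
			}
			if !running {
				return nil // stopped within the allowed attempts
			}
			log.Printf("Waiting for machine to stop %d/%d", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Stub that never reports "stopped", so this demo times out quickly.
		alwaysRunning := func() (bool, error) { return true, nil }
		if err := waitForStop(alwaysRunning, 3); err != nil {
			fmt.Println("stop failed:", err)
		}
	}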
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr: exit status 3 (18.895690275s)

                                                
                                                
-- stdout --
	ha-269108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-269108-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:46:53.919323   41635 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:46:53.919573   41635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:46:53.919583   41635 out.go:304] Setting ErrFile to fd 2...
	I0719 18:46:53.919587   41635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:46:53.919759   41635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:46:53.919955   41635 out.go:298] Setting JSON to false
	I0719 18:46:53.919983   41635 mustload.go:65] Loading cluster: ha-269108
	I0719 18:46:53.920102   41635 notify.go:220] Checking for updates...
	I0719 18:46:53.920330   41635 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:46:53.920347   41635 status.go:255] checking status of ha-269108 ...
	I0719 18:46:53.920724   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:53.920774   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:53.939005   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0719 18:46:53.939392   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:53.940006   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:53.940037   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:53.940360   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:53.940543   41635 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:46:53.942263   41635 status.go:330] ha-269108 host status = "Running" (err=<nil>)
	I0719 18:46:53.942279   41635 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:46:53.942554   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:53.942587   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:53.957603   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I0719 18:46:53.957961   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:53.958487   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:53.958512   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:53.958815   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:53.959011   41635 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:46:53.961693   41635 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:46:53.962044   41635 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:46:53.962078   41635 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:46:53.962214   41635 host.go:66] Checking if "ha-269108" exists ...
	I0719 18:46:53.962539   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:53.962580   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:53.976582   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0719 18:46:53.976970   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:53.977425   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:53.977444   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:53.977763   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:53.977955   41635 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:46:53.978126   41635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:46:53.978151   41635 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:46:53.980917   41635 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:46:53.981367   41635 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:46:53.981393   41635 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:46:53.981521   41635 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:46:53.981694   41635 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:46:53.981909   41635 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:46:53.982052   41635 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:46:54.064434   41635 ssh_runner.go:195] Run: systemctl --version
	I0719 18:46:54.071067   41635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:46:54.088511   41635 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:46:54.088541   41635 api_server.go:166] Checking apiserver status ...
	I0719 18:46:54.088590   41635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:46:54.104573   41635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5105/cgroup
	W0719 18:46:54.113819   41635 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:46:54.113879   41635 ssh_runner.go:195] Run: ls
	I0719 18:46:54.117945   41635 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:46:54.122907   41635 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:46:54.122939   41635 status.go:422] ha-269108 apiserver status = Running (err=<nil>)
	I0719 18:46:54.122948   41635 status.go:257] ha-269108 status: &{Name:ha-269108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:46:54.122977   41635 status.go:255] checking status of ha-269108-m02 ...
	I0719 18:46:54.123262   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.123292   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.137642   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0719 18:46:54.138040   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.138477   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.138497   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.138801   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.138991   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetState
	I0719 18:46:54.140503   41635 status.go:330] ha-269108-m02 host status = "Running" (err=<nil>)
	I0719 18:46:54.140520   41635 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:46:54.140856   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.140892   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.156020   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36069
	I0719 18:46:54.156479   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.156928   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.156946   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.157297   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.157468   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetIP
	I0719 18:46:54.160120   41635 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:46:54.160592   41635 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:42:25 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:46:54.160620   41635 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:46:54.160763   41635 host.go:66] Checking if "ha-269108-m02" exists ...
	I0719 18:46:54.161123   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.161159   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.176225   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0719 18:46:54.176616   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.177105   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.177135   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.177458   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.177652   41635 main.go:141] libmachine: (ha-269108-m02) Calling .DriverName
	I0719 18:46:54.177828   41635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:46:54.177846   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHHostname
	I0719 18:46:54.180526   41635 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:46:54.181092   41635 main.go:141] libmachine: (ha-269108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:a7:f0", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:42:25 +0000 UTC Type:0 Mac:52:54:00:6e:a7:f0 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-269108-m02 Clientid:01:52:54:00:6e:a7:f0}
	I0719 18:46:54.181122   41635 main.go:141] libmachine: (ha-269108-m02) DBG | domain ha-269108-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:6e:a7:f0 in network mk-ha-269108
	I0719 18:46:54.181347   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHPort
	I0719 18:46:54.181519   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHKeyPath
	I0719 18:46:54.181661   41635 main.go:141] libmachine: (ha-269108-m02) Calling .GetSSHUsername
	I0719 18:46:54.181776   41635 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m02/id_rsa Username:docker}
	I0719 18:46:54.265127   41635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 18:46:54.282521   41635 kubeconfig.go:125] found "ha-269108" server: "https://192.168.39.254:8443"
	I0719 18:46:54.282554   41635 api_server.go:166] Checking apiserver status ...
	I0719 18:46:54.282586   41635 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 18:46:54.296630   41635 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	W0719 18:46:54.305362   41635 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 18:46:54.305419   41635 ssh_runner.go:195] Run: ls
	I0719 18:46:54.309336   41635 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 18:46:54.313659   41635 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 18:46:54.313679   41635 status.go:422] ha-269108-m02 apiserver status = Running (err=<nil>)
	I0719 18:46:54.313687   41635 status.go:257] ha-269108-m02 status: &{Name:ha-269108-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 18:46:54.313709   41635 status.go:255] checking status of ha-269108-m04 ...
	I0719 18:46:54.314027   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.314070   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.328493   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0719 18:46:54.328891   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.329313   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.329331   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.329632   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.329802   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetState
	I0719 18:46:54.331259   41635 status.go:330] ha-269108-m04 host status = "Running" (err=<nil>)
	I0719 18:46:54.331278   41635 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:46:54.331544   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.331594   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.345994   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
	I0719 18:46:54.346407   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.346866   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.346886   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.347156   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.347290   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetIP
	I0719 18:46:54.349715   41635 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:46:54.350108   41635 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:44:21 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:46:54.350127   41635 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:46:54.350246   41635 host.go:66] Checking if "ha-269108-m04" exists ...
	I0719 18:46:54.350526   41635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:46:54.350555   41635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:46:54.366158   41635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0719 18:46:54.366715   41635 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:46:54.367208   41635 main.go:141] libmachine: Using API Version  1
	I0719 18:46:54.367231   41635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:46:54.367522   41635 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:46:54.367711   41635 main.go:141] libmachine: (ha-269108-m04) Calling .DriverName
	I0719 18:46:54.367887   41635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 18:46:54.367908   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHHostname
	I0719 18:46:54.370671   41635 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:46:54.371089   41635 main.go:141] libmachine: (ha-269108-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c4:15", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:44:21 +0000 UTC Type:0 Mac:52:54:00:44:c4:15 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-269108-m04 Clientid:01:52:54:00:44:c4:15}
	I0719 18:46:54.371111   41635 main.go:141] libmachine: (ha-269108-m04) DBG | domain ha-269108-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:44:c4:15 in network mk-ha-269108
	I0719 18:46:54.371263   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHPort
	I0719 18:46:54.371428   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHKeyPath
	I0719 18:46:54.371579   41635 main.go:141] libmachine: (ha-269108-m04) Calling .GetSSHUsername
	I0719 18:46:54.371708   41635 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108-m04/id_rsa Username:docker}
	W0719 18:47:12.772033   41635 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.17:22: connect: no route to host
	W0719 18:47:12.772125   41635 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	E0719 18:47:12.772146   41635 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	I0719 18:47:12.772156   41635 status.go:257] ha-269108-m04 status: &{Name:ha-269108-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0719 18:47:12.772172   41635 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr" : exit status 3
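The status failure above reduces to the m04 node being unreachable over SSH: the dial to 192.168.39.17:22 returns "no route to host", so the node is reported as Host:Error with kubelet Nonexistent. A small sketch of that kind of reachability probe using net.DialTimeout; the address and timeout are illustrative, taken from the log rather than from the status code path:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.17:22" // node SSH endpoint from the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A down or half-stopped VM typically shows up here as
			// "no route to host" or a timeout, matching the status output.
			fmt.Printf("%s unreachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}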
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-269108 -n ha-269108
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-269108 logs -n 25: (1.653347288s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m04 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp testdata/cp-test.txt                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108:/home/docker/cp-test_ha-269108-m04_ha-269108.txt                      |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108 sudo cat                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108.txt                                |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m02:/home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m02 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m03:/home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n                                                                | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | ha-269108-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-269108 ssh -n ha-269108-m03 sudo cat                                         | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC | 19 Jul 24 18:35 UTC |
	|         | /home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-269108 node stop m02 -v=7                                                    | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:35 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-269108 node start m02 -v=7                                                   | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-269108 -v=7                                                          | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-269108 -v=7                                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-269108 --wait=true -v=7                                                   | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:40 UTC | 19 Jul 24 18:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-269108                                                               | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:44 UTC |                     |
	| node    | ha-269108 node delete m03 -v=7                                                  | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:44 UTC | 19 Jul 24 18:44 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-269108 stop -v=7                                                             | ha-269108 | jenkins | v1.33.1 | 19 Jul 24 18:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:40:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:40:40.971899   39480 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:40:40.971996   39480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:40:40.972000   39480 out.go:304] Setting ErrFile to fd 2...
	I0719 18:40:40.972004   39480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:40:40.972196   39480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:40:40.972704   39480 out.go:298] Setting JSON to false
	I0719 18:40:40.973616   39480 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4984,"bootTime":1721409457,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:40:40.973667   39480 start.go:139] virtualization: kvm guest
	I0719 18:40:40.975799   39480 out.go:177] * [ha-269108] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:40:40.977180   39480 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:40:40.977180   39480 notify.go:220] Checking for updates...
	I0719 18:40:40.979721   39480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:40:40.981038   39480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:40:40.982360   39480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:40:40.983556   39480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:40:40.984660   39480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:40:40.986125   39480 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:40:40.986219   39480 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:40:40.986638   39480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:40:40.986678   39480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:40:41.001344   39480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0719 18:40:41.001754   39480 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:40:41.002395   39480 main.go:141] libmachine: Using API Version  1
	I0719 18:40:41.002416   39480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:40:41.002728   39480 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:40:41.002927   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.038177   39480 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 18:40:41.039406   39480 start.go:297] selected driver: kvm2
	I0719 18:40:41.039430   39480 start.go:901] validating driver "kvm2" against &{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:40:41.039607   39480 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:40:41.040057   39480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:40:41.040147   39480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:40:41.055156   39480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:40:41.056094   39480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 18:40:41.056171   39480 cni.go:84] Creating CNI manager for ""
	I0719 18:40:41.056187   39480 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 18:40:41.056259   39480 start.go:340] cluster config:
	{Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:40:41.056429   39480 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:40:41.058350   39480 out.go:177] * Starting "ha-269108" primary control-plane node in "ha-269108" cluster
	I0719 18:40:41.059535   39480 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:40:41.059562   39480 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:40:41.059575   39480 cache.go:56] Caching tarball of preloaded images
	I0719 18:40:41.059653   39480 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 18:40:41.059663   39480 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 18:40:41.059770   39480 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/config.json ...
	I0719 18:40:41.059976   39480 start.go:360] acquireMachinesLock for ha-269108: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 18:40:41.060017   39480 start.go:364] duration metric: took 24.403µs to acquireMachinesLock for "ha-269108"
	I0719 18:40:41.060030   39480 start.go:96] Skipping create...Using existing machine configuration
	I0719 18:40:41.060035   39480 fix.go:54] fixHost starting: 
	I0719 18:40:41.060278   39480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:40:41.060313   39480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:40:41.073685   39480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34381
	I0719 18:40:41.074114   39480 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:40:41.074548   39480 main.go:141] libmachine: Using API Version  1
	I0719 18:40:41.074589   39480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:40:41.074904   39480 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:40:41.075076   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.075219   39480 main.go:141] libmachine: (ha-269108) Calling .GetState
	I0719 18:40:41.076660   39480 fix.go:112] recreateIfNeeded on ha-269108: state=Running err=<nil>
	W0719 18:40:41.076676   39480 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 18:40:41.078553   39480 out.go:177] * Updating the running kvm2 "ha-269108" VM ...
	I0719 18:40:41.079604   39480 machine.go:94] provisionDockerMachine start ...
	I0719 18:40:41.079620   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:40:41.079798   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.082055   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.082488   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.082512   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.082645   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.082793   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.082954   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.083050   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.083191   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.083381   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.083392   39480 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 18:40:41.184645   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:40:41.184687   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.184939   39480 buildroot.go:166] provisioning hostname "ha-269108"
	I0719 18:40:41.184962   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.185135   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.187916   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.188453   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.188483   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.188663   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.188869   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.189146   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.189290   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.189496   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.189721   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.189737   39480 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-269108 && echo "ha-269108" | sudo tee /etc/hostname
	I0719 18:40:41.307980   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-269108
	
	I0719 18:40:41.308006   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.310504   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.310879   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.310907   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.311080   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.311266   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.311400   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.311505   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.311815   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.311997   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.312014   39480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-269108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-269108/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-269108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 18:40:41.412347   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 18:40:41.412379   39480 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 18:40:41.412412   39480 buildroot.go:174] setting up certificates
	I0719 18:40:41.412425   39480 provision.go:84] configureAuth start
	I0719 18:40:41.412438   39480 main.go:141] libmachine: (ha-269108) Calling .GetMachineName
	I0719 18:40:41.412695   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:40:41.415205   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.415553   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.415573   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.415947   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.418305   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.418614   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.418646   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.418784   39480 provision.go:143] copyHostCerts
	I0719 18:40:41.418819   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:40:41.418861   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 18:40:41.418870   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 18:40:41.418932   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 18:40:41.419025   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:40:41.419043   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 18:40:41.419050   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 18:40:41.419074   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 18:40:41.419130   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:40:41.419158   39480 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 18:40:41.419166   39480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 18:40:41.419196   39480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 18:40:41.419264   39480 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.ha-269108 san=[127.0.0.1 192.168.39.163 ha-269108 localhost minikube]
	I0719 18:40:41.484308   39480 provision.go:177] copyRemoteCerts
	I0719 18:40:41.484374   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 18:40:41.484397   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.486893   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.487165   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.487188   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.487382   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.487588   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.487739   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.487878   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:40:41.566797   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 18:40:41.566856   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 18:40:41.594675   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 18:40:41.594757   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 18:40:41.618700   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 18:40:41.618780   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 18:40:41.642104   39480 provision.go:87] duration metric: took 229.651846ms to configureAuth
	I0719 18:40:41.642132   39480 buildroot.go:189] setting minikube options for container-runtime
	I0719 18:40:41.642349   39480 config.go:182] Loaded profile config "ha-269108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:40:41.642429   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:40:41.645394   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.645746   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:40:41.645778   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:40:41.645978   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:40:41.646175   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.646357   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:40:41.646532   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:40:41.646676   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:40:41.646848   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:40:41.646862   39480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 18:42:12.376387   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 18:42:12.376420   39480 machine.go:97] duration metric: took 1m31.296804013s to provisionDockerMachine
	I0719 18:42:12.376434   39480 start.go:293] postStartSetup for "ha-269108" (driver="kvm2")
	I0719 18:42:12.376447   39480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 18:42:12.376469   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.376843   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 18:42:12.376870   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.379922   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.380324   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.380351   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.380498   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.380695   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.380908   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.381163   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.463686   39480 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 18:42:12.467642   39480 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 18:42:12.467667   39480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 18:42:12.467745   39480 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 18:42:12.467884   39480 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 18:42:12.467898   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 18:42:12.468010   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 18:42:12.477453   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:42:12.500252   39480 start.go:296] duration metric: took 123.795057ms for postStartSetup
	I0719 18:42:12.500297   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.500606   39480 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 18:42:12.500641   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.503042   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.503403   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.503429   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.503577   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.503746   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.504009   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.504165   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	W0719 18:42:12.582005   39480 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 18:42:12.582033   39480 fix.go:56] duration metric: took 1m31.521997605s for fixHost
	I0719 18:42:12.582052   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.584714   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.585097   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.585136   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.585323   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.585512   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.585666   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.585806   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.585974   39480 main.go:141] libmachine: Using SSH client type: native
	I0719 18:42:12.586129   39480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 18:42:12.586138   39480 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 18:42:12.695290   39480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721414532.655233901
	
	I0719 18:42:12.695314   39480 fix.go:216] guest clock: 1721414532.655233901
	I0719 18:42:12.695326   39480 fix.go:229] Guest: 2024-07-19 18:42:12.655233901 +0000 UTC Remote: 2024-07-19 18:42:12.582040262 +0000 UTC m=+91.643952274 (delta=73.193639ms)
	I0719 18:42:12.695350   39480 fix.go:200] guest clock delta is within tolerance: 73.193639ms
	I0719 18:42:12.695356   39480 start.go:83] releasing machines lock for "ha-269108", held for 1m31.635330658s
	I0719 18:42:12.695382   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.695645   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:42:12.698257   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.698699   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.698728   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.698903   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699441   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699625   39480 main.go:141] libmachine: (ha-269108) Calling .DriverName
	I0719 18:42:12.699726   39480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 18:42:12.699772   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.699830   39480 ssh_runner.go:195] Run: cat /version.json
	I0719 18:42:12.699862   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHHostname
	I0719 18:42:12.702370   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702652   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702809   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.702832   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.702983   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.702999   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:12.703020   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:12.703148   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.703195   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHPort
	I0719 18:42:12.703297   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.703360   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHKeyPath
	I0719 18:42:12.703442   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.703849   39480 main.go:141] libmachine: (ha-269108) Calling .GetSSHUsername
	I0719 18:42:12.703981   39480 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/ha-269108/id_rsa Username:docker}
	I0719 18:42:12.801355   39480 ssh_runner.go:195] Run: systemctl --version
	I0719 18:42:12.850880   39480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 18:42:13.001635   39480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 18:42:13.007797   39480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 18:42:13.007880   39480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 18:42:13.016557   39480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 18:42:13.016580   39480 start.go:495] detecting cgroup driver to use...
	I0719 18:42:13.016633   39480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 18:42:13.032290   39480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 18:42:13.045718   39480 docker.go:217] disabling cri-docker service (if available) ...
	I0719 18:42:13.045777   39480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 18:42:13.058831   39480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 18:42:13.071706   39480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 18:42:13.234299   39480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 18:42:13.383534   39480 docker.go:233] disabling docker service ...
	I0719 18:42:13.383606   39480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 18:42:13.406654   39480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 18:42:13.426184   39480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 18:42:13.566047   39480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 18:42:13.727243   39480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 18:42:13.742905   39480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 18:42:13.760883   39480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 18:42:13.760959   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.770847   39480 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 18:42:13.770938   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.780693   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.790613   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.800329   39480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 18:42:13.810177   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.820509   39480 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.830482   39480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 18:42:13.840826   39480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 18:42:13.850061   39480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 18:42:13.860051   39480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:42:14.007079   39480 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 18:42:14.286724   39480 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 18:42:14.286789   39480 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 18:42:14.291387   39480 start.go:563] Will wait 60s for crictl version
	I0719 18:42:14.291434   39480 ssh_runner.go:195] Run: which crictl
	I0719 18:42:14.295091   39480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 18:42:14.335471   39480 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 18:42:14.335546   39480 ssh_runner.go:195] Run: crio --version
	I0719 18:42:14.364798   39480 ssh_runner.go:195] Run: crio --version
	I0719 18:42:14.396039   39480 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 18:42:14.397303   39480 main.go:141] libmachine: (ha-269108) Calling .GetIP
	I0719 18:42:14.399682   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:14.400120   39480 main.go:141] libmachine: (ha-269108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:2d:bd", ip: ""} in network mk-ha-269108: {Iface:virbr1 ExpiryTime:2024-07-19 19:30:44 +0000 UTC Type:0 Mac:52:54:00:0f:2d:bd Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-269108 Clientid:01:52:54:00:0f:2d:bd}
	I0719 18:42:14.400143   39480 main.go:141] libmachine: (ha-269108) DBG | domain ha-269108 has defined IP address 192.168.39.163 and MAC address 52:54:00:0f:2d:bd in network mk-ha-269108
	I0719 18:42:14.400320   39480 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 18:42:14.404823   39480 kubeadm.go:883] updating cluster {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 18:42:14.404988   39480 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:42:14.405033   39480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:42:14.447576   39480 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:42:14.447595   39480 crio.go:433] Images already preloaded, skipping extraction
	I0719 18:42:14.447636   39480 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 18:42:14.481040   39480 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 18:42:14.481061   39480 cache_images.go:84] Images are preloaded, skipping loading
	I0719 18:42:14.481070   39480 kubeadm.go:934] updating node { 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 18:42:14.481170   39480 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-269108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 18:42:14.481242   39480 ssh_runner.go:195] Run: crio config
	I0719 18:42:14.525151   39480 cni.go:84] Creating CNI manager for ""
	I0719 18:42:14.525176   39480 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 18:42:14.525192   39480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 18:42:14.525226   39480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.163 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-269108 NodeName:ha-269108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.163"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.163 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 18:42:14.525377   39480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.163
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-269108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.163
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.163"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 18:42:14.525403   39480 kube-vip.go:115] generating kube-vip config ...
	I0719 18:42:14.525537   39480 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 18:42:14.536439   39480 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 18:42:14.536574   39480 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 18:42:14.536630   39480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 18:42:14.545704   39480 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 18:42:14.545781   39480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 18:42:14.554294   39480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 18:42:14.569627   39480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 18:42:14.585194   39480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 18:42:14.600863   39480 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 18:42:14.616531   39480 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 18:42:14.621343   39480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 18:42:14.766040   39480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 18:42:14.781020   39480 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108 for IP: 192.168.39.163
	I0719 18:42:14.781045   39480 certs.go:194] generating shared ca certs ...
	I0719 18:42:14.781063   39480 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.781241   39480 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 18:42:14.781285   39480 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 18:42:14.781296   39480 certs.go:256] generating profile certs ...
	I0719 18:42:14.781380   39480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/client.key
	I0719 18:42:14.781406   39480 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c
	I0719 18:42:14.781423   39480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.163 192.168.39.87 192.168.39.111 192.168.39.254]
	I0719 18:42:14.993023   39480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c ...
	I0719 18:42:14.993058   39480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c: {Name:mk647737b2511a6613bcf2466c374c86204ef3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.993248   39480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c ...
	I0719 18:42:14.993265   39480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c: {Name:mkc35ee48db630f062b5250f57cf2b3ada419b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:42:14.993361   39480 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt.552be74c -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt
	I0719 18:42:14.993534   39480 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key.552be74c -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key
	I0719 18:42:14.993696   39480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key
	I0719 18:42:14.993714   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 18:42:14.993733   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 18:42:14.993747   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 18:42:14.993766   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 18:42:14.993781   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 18:42:14.993797   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 18:42:14.993821   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 18:42:14.993876   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 18:42:14.993942   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 18:42:14.993984   39480 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 18:42:14.993997   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 18:42:14.994029   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 18:42:14.994072   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 18:42:14.994106   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 18:42:14.994158   39480 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 18:42:14.994193   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 18:42:14.994220   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:14.994242   39480 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 18:42:14.994833   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 18:42:15.019216   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 18:42:15.042097   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 18:42:15.064872   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 18:42:15.087820   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 18:42:15.109471   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 18:42:15.131594   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 18:42:15.153584   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/ha-269108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 18:42:15.176046   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 18:42:15.197403   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 18:42:15.219801   39480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 18:42:15.242935   39480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 18:42:15.258054   39480 ssh_runner.go:195] Run: openssl version
	I0719 18:42:15.263628   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 18:42:15.274482   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.279364   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.279416   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 18:42:15.284668   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 18:42:15.293344   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 18:42:15.303047   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.307380   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.307430   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 18:42:15.312762   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 18:42:15.321181   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 18:42:15.331537   39480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.335986   39480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.336039   39480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 18:42:15.341384   39480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
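
A minimal Go sketch (an editorial illustration, not minikube's own code) of what the "openssl x509 -hash" / "ln -fs" pairs above accomplish: OpenSSL resolves trusted CAs in /etc/ssl/certs by subject-name hash, so each PEM is exposed through a symlink named <hash>.0 (in this run, for example, minikubeCA.pem is linked as b5213941.0). The paths in main are taken from the log; everything else is illustrative.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash asks openssl for the certificate's subject-name hash and
	// creates (or replaces) the <hash>.0 symlink, mirroring the "ln -fs" calls above.
	func linkBySubjectHash(pemPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // "-f" semantics: drop a stale link if one exists
		return link, os.Symlink(pemPath, link)
	}
	
	func main() {
		link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		fmt.Println(link, err)
	}
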
	I0719 18:42:15.349879   39480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 18:42:15.354058   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 18:42:15.359409   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 18:42:15.365505   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 18:42:15.371105   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 18:42:15.376402   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 18:42:15.381581   39480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
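
The "-checkend 86400" probes above ask openssl whether each control-plane certificate remains valid for the next 86400 seconds (24 hours): exit status 0 means it will not expire in that window, a nonzero status means it will. A small Go sketch of that check (again an editorial illustration under those assumptions, not minikube source; the certificate path is one of those probed above):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// validForAnotherDay reports whether the certificate survives the next 24 hours,
	// using openssl's exit status exactly as the log lines above do.
	func validForAnotherDay(certPath string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			if _, isExit := err.(*exec.ExitError); isExit {
				// openssl exited nonzero: the certificate expires within the window
				// (or could not be parsed).
				return false, nil
			}
			return false, err // openssl could not be started at all
		}
		return true, nil
	}
	
	func main() {
		ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		fmt.Println(ok, err)
	}
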
	I0719 18:42:15.386638   39480 kubeadm.go:392] StartCluster: {Name:ha-269108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-269108 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:42:15.386787   39480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 18:42:15.386845   39480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 18:42:15.420628   39480 cri.go:89] found id: "1b4b0cf4bc834e140fe853005153f907336f59ad55a6520464db8afd71250afa"
	I0719 18:42:15.420648   39480 cri.go:89] found id: "61e06264256fd9c46c5db2f17e1bfe6d9458fdb6b286b8b5a73b64244b61c6ea"
	I0719 18:42:15.420652   39480 cri.go:89] found id: "dbb001dbe7be6d9a302c79a50318de53b13d624b2495932e885d40b5c3a53e36"
	I0719 18:42:15.420656   39480 cri.go:89] found id: "4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433"
	I0719 18:42:15.420659   39480 cri.go:89] found id: "160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a"
	I0719 18:42:15.420662   39480 cri.go:89] found id: "1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671"
	I0719 18:42:15.420665   39480 cri.go:89] found id: "61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452"
	I0719 18:42:15.420668   39480 cri.go:89] found id: "00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74"
	I0719 18:42:15.420670   39480 cri.go:89] found id: "c3d91756bd0168a43e8b6f1ae9046710c68f13345e57b69b224b80195614364f"
	I0719 18:42:15.420676   39480 cri.go:89] found id: "65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d"
	I0719 18:42:15.420695   39480 cri.go:89] found id: ""
	I0719 18:42:15.420735   39480 ssh_runner.go:195] Run: sudo runc list -f json
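
The "found id:" entries above come from the crictl invocation shown in the log: with --quiet, crictl prints one container ID per line, and the caller simply splits that output. A short hedged Go sketch of that pattern (illustrative only; the command and label filter are the ones from the log, the function name is invented):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// kubeSystemContainerIDs lists all CRI container IDs labelled with the
	// kube-system namespace, one ID per output line.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per line, blank lines dropped
	}
	
	func main() {
		ids, err := kubeSystemContainerIDs()
		fmt.Println(ids, err)
	}
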
	
	
	==> CRI-O <==
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.521891914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721414616272082189,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721414579261633444,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721414575261965951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,PodSandboxId:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721414569530811069,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f438a5b11de68af439c31a82cf577b8339e9ab4115915f8f84256b98d21cf4,PodSandboxId:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721414568264983939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,PodSandboxId:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721414551291706910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,PodSandboxId:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536407869616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,PodSandboxId:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721414536291456893,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,PodSandboxId:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721414536384687335,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,PodSandboxId:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721414536205718187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c,PodSandboxId:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721414536144949582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,PodSandboxId:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721414536180254081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,PodSandboxId:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721414536030037354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9
513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b,PodSandboxId:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721414535988012130,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7246971f05346f2dc79fb64d87f9e45c4dbe98d10104f0ef433c848235f3b80,PodSandboxId:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721414041026695986,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a,PodSandboxId:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903448938005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433,PodSandboxId:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721413903464969133,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671,PodSandboxId:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721413891286212300,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452,PodSandboxId:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721413887206521167,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74,PodSandboxId:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721413866852793774,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d,PodSandboxId:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721413866792915678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98d5f956-38e4-40e6-890a-71ec03390f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.522810769Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=01f2d923-aae0-4d02-8370-a29aa3e60493 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.522833832Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,Verbose:false,}" file="otel-collector/interceptors.go:62" id=af65f009-ee21-47e6-aa50-048189361b7d name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.523674862Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e29d9d2d41500bc20420d9caf8ce086e9c6da73f9c9f0a8671a968e0affcc0bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},State:CONTAINER_RUNNING,CreatedAt:1721414616334199391,StartedAt:1721414616358874246,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{io.kubernetes.container.hash: 8461d301,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/38aaccd6-4fb9-4744-b502-261ebeaf476a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/38aaccd6-4fb9-4744-b502-261ebeaf476a/containers/storage-provisioner/b2d0fce1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/38aaccd6-4fb9-4744-b502-261ebeaf476a/volumes/kubernetes.io~projected/kube-api-access-kq9sd,Readonly:true,SelinuxRelabel:false,Propa
gation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_38aaccd6-4fb9-4744-b502-261ebeaf476a/storage-provisioner/4.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=af65f009-ee21-47e6-aa50-048189361b7d name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.523719369Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d2257256b0b5512cda01326fca07300d928f2889d76d5b38a5288e06096bc230,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-8qqtr,Uid:c0699d4b-e052-40cf-b41c-ab604d64531c,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414569408210366,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:33:57.835432375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:135591c805808ce4cb4fbddad10e4d6c528e072f94adb2e39f69c8b1e28dd9f3,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-269108,Uid:fcefaef2992cd2ee3a947d6974f9e123,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721414551197495319,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{kubernetes.io/config.hash: fcefaef2992cd2ee3a947d6974f9e123,kubernetes.io/config.seen: 2024-07-19T18:42:14.581017205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b8bac15efc8d4e42b6aed98a9de123b2d10eb77147edb1ff521ac9cc5c252e8f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zxh8h,Uid:1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721414535769700378,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-19T18:31:42.936918301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7eeab2b3bad0c4d6e7586d6ad9fc6ce0c0f0429282a496f7a48d87a36952ddcd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:38aaccd6-4fb9-4744-b502-261ebeaf476a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535695638199,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38aaccd6-4fb9-4744-b502-261ebeaf476a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T18:31:42.933076031Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0788c2a97f3671154fcab5a8e80fc4b554a92775c43e8543c4ec8330287f0649,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-frb69,Uid:bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535678739604,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/confi
g.seen: 2024-07-19T18:31:42.926192286Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dcc6afb97407c367e9d3233a79b7b5316b719fc4d53f58596b1a370d2da443ca,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-269108,Uid:a30a862431730cad16e447d55a4fb821,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535670647286,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a30a862431730cad16e447d55a4fb821,kubernetes.io/config.seen: 2024-07-19T18:31:13.194450572Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:08674abf83e5900d70bd790f31552962dfaa5088eac2127184ba4a08460d5eb7,Metadata:&PodSandboxMetadata{Name:etcd-ha-269108,Uid:dffd3f3edf9c9be1c68e639689e4e06a,Namespace:kube-system,Attempt:1,},State:SANDBOX_REA
DY,CreatedAt:1721414535668131811,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.163:2379,kubernetes.io/config.hash: dffd3f3edf9c9be1c68e639689e4e06a,kubernetes.io/config.seen: 2024-07-19T18:31:13.194452224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:62bcad4efd94bca1108500804ec41eef50ea88d4e6cff2bf652f3bedcacb9542,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-269108,Uid:d46eaeb953414d3529b8c9640aa3b7e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535668044925,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8
c9640aa3b7e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.163:8443,kubernetes.io/config.hash: d46eaeb953414d3529b8c9640aa3b7e1,kubernetes.io/config.seen: 2024-07-19T18:31:13.194452996Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cdf045f94dd3e94cdb3b9fb8065fb71a192e22ae8efb808fa7bccc89dc453f26,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-269108,Uid:295bf86c7591ac85ae0841105f8684b3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535662874116,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 295bf86c7591ac85ae0841105f8684b3,kubernetes.io/config.seen: 2024-07-19T18:31:13.194447048Z,kubernetes.io/
config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33491bdbed43991c1f8e942263d87ddfb0ee4b6ff5153a42931df4ef94e12215,Metadata:&PodSandboxMetadata{Name:kindnet-4lhkr,Uid:92211e52-9a67-4b01-a4af-0769596d7103,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535659693693,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:26.604421868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05561f6f664c936be919ff8787423728deeb0966892197b5ed8771b3f4f991ed,Metadata:&PodSandboxMetadata{Name:kube-proxy-pkjns,Uid:f97d4b6b-eb80-46c3-9513-ef6e9af5318e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721414535655812203,Labels:map[string]string{cont
roller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:26.601826870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ce5a6a0852e27942f0fc5b9171cce34e14a87ce419545c3842e9353ab6324b8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-8qqtr,Uid:c0699d4b-e052-40cf-b41c-ab604d64531c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721414038155906256,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:33:57.835432375Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:3932d100e152d2ea42e6eb0df146808e82dc875697eb60499c0321a97ca99df2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-zxh8h,Uid:1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413903249875836,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:42.936918301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a1d288c9d262c0228c5bc042b7dcf56aee7adfe173e152f2e29c3f98ad72afa,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-frb69,Uid:bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413903232646742,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:42.926192286Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f4d197a59cd5ff468ce92a597b19649b7d39296e988d499593484af548c9fb9,Metadata:&PodSandboxMetadata{Name:kube-proxy-pkjns,Uid:f97d4b6b-eb80-46c3-9513-ef6e9af5318e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413886928593255,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:26.601826870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:eb0e9822a0f4e6fa2ff72be57c5b6d30df9a5b7e0c7ee0c4cc4aae2a6faaf91a,Metadata:&PodSandboxMetadata{Name:kindnet-4lhkr,Uid:92211e52-9a67-4b01-a4af-0769596d7103,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413886913548550,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T18:31:26.604421868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a07a59b0e0e4715d5e80dcbe88bfacb960a6a0b4f96bd8f70828ab198e71bf5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-269108,Uid:a30a862431730cad16e447d55a4fb821,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413866614602212,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a30a862431730cad16e447d55a4fb821,kubernetes.io/config.seen: 2024-07-19T18:31:06.145281135Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4473d4683f2e13119e84131069ee05447abc7ffc6505f1cadf4505b54f90ee42,Metadata:&PodSandboxMetadata{Name:etcd-ha-269108,Uid:dffd3f3edf9c9be1c68e639689e4e06a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721413866609110086,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.163:2379,kubernetes.io/config.hash: dffd3f3e
df9c9be1c68e639689e4e06a,kubernetes.io/config.seen: 2024-07-19T18:31:06.145286556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=01f2d923-aae0-4d02-8370-a29aa3e60493 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.524978597Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=add318a1-c496-4202-8ddc-60efdd1197ff name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.525240812Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1721414579309842458,StartedAt:1721414579361453279,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d46eaeb953414d3529b8c9640aa3b7e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6b1a8807,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d46eaeb953414d3529b8c9640aa3b7e1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d46eaeb953414d3529b8c9640aa3b7e1/containers/kube-apiserver/7c43c80e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib
/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-269108_d46eaeb953414d3529b8c9640aa3b7e1/kube-apiserver/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=add318a1-c496-4202-8ddc-60efdd1197ff name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.525765127Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=367379c1-46d4-4549-a8b1-0edbf54e5d28 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.525960903Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1721414575299188543,StartedAt:1721414575343216368,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295bf86c7591ac85ae0841105f8684b3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/295bf86c7591ac85ae0841105f8684b3/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/295bf86c7591ac85ae0841105f8684b3/containers/kube-controller-manager/825ab16b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]
*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-ha-269108_295bf86c7591ac85ae0841105f8684b3/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*
HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=367379c1-46d4-4549-a8b1-0edbf54e5d28 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.526446108Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d6824658-fa3c-49d9-b6ae-e967080b1a96 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.526623743Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:820fa6d57d467304530b5aedf8f113e6ebf1c0c817fc0b9a769715d2219876fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414569560354010,StartedAt:1721414569581866179,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8qqtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c0699d4b-e052-40cf-b41c-ab604d64531c,},Annotations:map[string]string{io.kubernetes.container.hash: 5c041215,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c0699d4b-e052-40cf-b41c-ab604d64531c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c0699d4b-e052-40cf-b41c-ab604d64531c/containers/busybox/a569a016,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/c0699d4b-e052-40cf-b41c-ab604d64531c/volumes/kubernetes.io~projected/kube-api-access-2bnml,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-fc5497c4f-8qqtr_c0699d4b-e052-40cf-b41c-ab604d64531c/busybox/1.log,Resources:&Container
Resources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d6824658-fa3c-49d9-b6ae-e967080b1a96 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.527259302Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=343648e9-8741-4970-9375-8008258fce01 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.527394858Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:cf6013946ad3567c6a2dbcd6b32d3f6d236172aa2cd19f42c8bcfc3114fbd74b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1721414551330347437,StartedAt:1721414551360710065,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.8.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcefaef2992cd2ee3a947d6974f9e123,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/fcefaef2992cd2ee3a947d6974f9e123/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/fcefaef2992cd2ee3a947d6974f9e123/containers/kube-vip/f8dbd7b8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-269108_fcefaef2992cd2ee3a947d6974f9e123/kube-vip/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000
,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=343648e9-8741-4970-9375-8008258fce01 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.527922917Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=55c5e477-0826-4d36-914e-bf04ba8c2866 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.528102922Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536658381005,StartedAt:1721414536723267430,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zxh8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7619a7d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8/containers/coredns/b30a2cdf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8/volumes/kubernetes.io~projected/kube-api-access-p2cvr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-zxh8h_1016c72c-a0e0-41fe-8f64-1ba03d7ce1c8/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=55c5e477-0826-4d36-914e-bf04ba8c2866 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.528693184Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,Verbose:false,}" file="otel-collector/interceptors.go:62" id=ef260dd4-507f-4dda-a3e9-2c15a701a906 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.528905121Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536565245161,StartedAt:1721414536642871399,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240719-e7903573,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4lhkr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92211e52-9a67-4b01-a4af-0769596d7103,},Annotations:map[string]string{io.kubernetes.container.hash: a2713f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/92211e52-9a67-4b01-a4af-0769596d7103/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/92211e52-9a67-4b01-a4af-0769596d7103/containers/kindnet-cni/07ae74bb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath:/etc/c
ni/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/92211e52-9a67-4b01-a4af-0769596d7103/volumes/kubernetes.io~projected/kube-api-access-vbc5b,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-4lhkr_92211e52-9a67-4b01-a4af-0769596d7103/kindnet-cni/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=ef260dd4-507f-4dda-a3e9-2c15a701a906
name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.529483463Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0052cd71-9e93-4157-ad19-42a1b47a0b87 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.529631078Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536530802762,StartedAt:1721414536570068994,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-frb69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd2dc6fa-ab54-44e1-b3b1-3e63af76140b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a98894c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/bd2dc6fa-ab54-44e1-b3b1-3e63af76140b/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bd2dc6fa-ab54-44e1-b3b1-3e63af76140b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bd2dc6fa-ab54-44e1-b3b1-3e63af76140b/containers/coredns/1cad6358,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/bd2dc6fa-ab54-44e1-b3b1-3e63af76140b/volumes/kubernetes.io~projected/kube-api-access-x4cqd,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-frb69_bd2dc6fa-ab54-44e1-b3b1-3e63af76140b/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0052cd71-9e93-4157-ad19-42a1b47a0b87 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.530013761Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6b430a1f-f5da-465c-9600-df79f989cd0c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.530114727Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536526634083,StartedAt:1721414536786672336,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dffd3f3edf9c9be1c68e639689e4e06a,},Annotations:map[string]string{io.kubernetes.container.hash: 6829aeb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dffd3f3edf9c9be1c68e639689e4e06a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dffd3f3edf9c9be1c68e639689e4e06a/containers/etcd/2cddbf9e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-269108_dffd3
f3edf9c9be1c68e639689e4e06a/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6b430a1f-f5da-465c-9600-df79f989cd0c name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.530616927Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,Verbose:false,}" file="otel-collector/interceptors.go:62" id=efa89ec0-c2e1-4d62-a52e-dc8bc51f8609 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.530845996Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536317213172,StartedAt:1721414536479726885,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-269108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30a862431730cad16e447d55a4fb821,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a30a862431730cad16e447d55a4fb821/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a30a862431730cad16e447d55a4fb821/containers/kube-scheduler/d82765e4,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-269108_a30a862431730cad16e447d55a4fb821/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,
CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=efa89ec0-c2e1-4d62-a52e-dc8bc51f8609 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.531340949Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8b08e640-5436-41ce-8439-1431daaee8e3 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 19 18:47:13 ha-269108 crio[3808]: time="2024-07-19 18:47:13.531493596Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721414536124984578,StartedAt:1721414536201125903,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkjns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f97d4b6b-eb80-46c3-9513-ef6e9af5318e,},Annotations:map[string]string{io.kubernetes.container.hash: b6a228cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f97d4b6b-eb80-46c3-9513-ef6e9af5318e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f97d4b6b-eb80-46c3-9513-ef6e9af5318e/containers/kube-proxy/5b2f2b43,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kub
elet/pods/f97d4b6b-eb80-46c3-9513-ef6e9af5318e/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/f97d4b6b-eb80-46c3-9513-ef6e9af5318e/volumes/kubernetes.io~projected/kube-api-access-slmwj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-pkjns_f97d4b6b-eb80-46c3-9513-ef6e9af5318e/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collecto
r/interceptors.go:74" id=8b08e640-5436-41ce-8439-1431daaee8e3 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e29d9d2d41500       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   7eeab2b3bad0c       storage-provisioner
	34be3054e3f20       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   62bcad4efd94b       kube-apiserver-ha-269108
	a0f5d4e1754eb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   cdf045f94dd3e       kube-controller-manager-ha-269108
	820fa6d57d467       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   d2257256b0b55       busybox-fc5497c4f-8qqtr
	a8f438a5b11de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   7eeab2b3bad0c       storage-provisioner
	cf6013946ad35       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   135591c805808       kube-vip-ha-269108
	f8510f25dcbe4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   b8bac15efc8d4       coredns-7db6d8ff4d-zxh8h
	6e513814a1603       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   0788c2a97f367       coredns-7db6d8ff4d-frb69
	00a3dee6a9392       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   33491bdbed439       kindnet-4lhkr
	8c99e057b54cd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   08674abf83e59       etcd-ha-269108
	02e7b50680c1d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   dcc6afb97407c       kube-scheduler-ha-269108
	81fb866ed689e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   cdf045f94dd3e       kube-controller-manager-ha-269108
	01c9be2fa966f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   05561f6f664c9       kube-proxy-pkjns
	76c0f31007003       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   62bcad4efd94b       kube-apiserver-ha-269108
	f7246971f0534       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   8ce5a6a0852e2       busybox-fc5497c4f-8qqtr
	4e3cdc51f7ed0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   3932d100e152d       coredns-7db6d8ff4d-zxh8h
	160fdbf709fdf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   7a1d288c9d262       coredns-7db6d8ff4d-frb69
	1b0808457c5c7       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   eb0e9822a0f4e       kindnet-4lhkr
	61f7a66979140       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   1f4d197a59cd5       kube-proxy-pkjns
	00761e9db17ec       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   9a07a59b0e0e4       kube-scheduler-ha-269108
	65cdebd7fead6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   4473d4683f2e1       etcd-ha-269108
	
	
	==> coredns [160fdbf709fdfd0919bdc4cac02da0844e0b17c6e6272326ed4de2e8c824f16a] <==
	[INFO] 10.244.2.2:41729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195097s
	[INFO] 10.244.2.2:54527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110807s
	[INFO] 10.244.0.4:56953 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073764s
	[INFO] 10.244.0.4:52941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231699s
	[INFO] 10.244.1.2:55601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109961s
	[INFO] 10.244.1.2:35949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100008s
	[INFO] 10.244.1.2:37921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013057s
	[INFO] 10.244.2.2:46987 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135522s
	[INFO] 10.244.2.2:49412 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111732s
	[INFO] 10.244.0.4:60902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080561s
	[INFO] 10.244.0.4:46371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144185s
	[INFO] 10.244.1.2:42114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113886s
	[INFO] 10.244.1.2:42251 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112767s
	[INFO] 10.244.1.2:32914 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121034s
	[INFO] 10.244.2.2:44583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109669s
	[INFO] 10.244.2.2:43221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085803s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1865&timeout=8m43s&timeoutSeconds=523&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1865&timeout=9m22s&timeoutSeconds=562&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e3cdc51f7ed094ed5462f11f5cff75cb3d6d30639d4cb6e22c13a07f94a6433] <==
	[INFO] 10.244.1.2:37958 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120338s
	[INFO] 10.244.1.2:41663 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057406s
	[INFO] 10.244.2.2:39895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001764061s
	[INFO] 10.244.2.2:32797 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140596s
	[INFO] 10.244.2.2:41374 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014218s
	[INFO] 10.244.2.2:38111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248331s
	[INFO] 10.244.2.2:40563 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167068s
	[INFO] 10.244.2.2:35228 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156257s
	[INFO] 10.244.0.4:60265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081089s
	[INFO] 10.244.0.4:46991 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004503s
	[INFO] 10.244.1.2:54120 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097136s
	[INFO] 10.244.2.2:36674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166666s
	[INFO] 10.244.2.2:45715 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115611s
	[INFO] 10.244.0.4:49067 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123058s
	[INFO] 10.244.0.4:53779 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117437s
	[INFO] 10.244.1.2:35193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133641s
	[INFO] 10.244.2.2:48409 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071901s
	[INFO] 10.244.2.2:34803 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085512s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1865&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6e513814a1603c4b7d88bf130bd6602615f991a041f8bcc423fbaab913b64822] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1033758847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:42:28.055) (total time: 10698ms):
	Trace[1033758847]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer 10697ms (18:42:38.753)
	Trace[1033758847]: [10.698408486s] [10.698408486s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37486->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f8510f25dcbe4b97ca586c997088a07719140330bfbc0ca10b6ba8aaac2ebe6e] <==
	Trace[2078807794]: [13.113177825s] [13.113177825s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42372->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[653282894]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:42:28.523) (total time: 12904ms):
	Trace[653282894]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer 12904ms (18:42:41.428)
	Trace[653282894]: [12.904551114s] [12.904551114s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:42388->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-269108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_31_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:47:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:45:51 +0000   Fri, 19 Jul 2024 18:45:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:45:51 +0000   Fri, 19 Jul 2024 18:45:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:45:51 +0000   Fri, 19 Jul 2024 18:45:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:45:51 +0000   Fri, 19 Jul 2024 18:45:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-269108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e79eb460340347cf89b27475ae92f17a
	  System UUID:                e79eb460-3403-47cf-89b2-7475ae92f17a
	  Boot ID:                    54c61684-08fe-4593-a142-b4110f9ebe97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8qqtr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-frb69             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-zxh8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-269108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-4lhkr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-269108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-269108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pkjns                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-269108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-269108                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 4m12s              kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Warning  ContainerGCFailed        6m                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m3s               node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           3m59s              node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   RegisteredNode           3m8s               node-controller  Node ha-269108 event: Registered Node ha-269108 in Controller
	  Normal   NodeNotReady             104s               node-controller  Node ha-269108 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     82s (x2 over 16m)  kubelet          Node ha-269108 status is now: NodeHasSufficientPID
	  Normal   NodeReady                82s (x2 over 15m)  kubelet          Node ha-269108 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    82s (x2 over 16m)  kubelet          Node ha-269108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  82s (x2 over 16m)  kubelet          Node ha-269108 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-269108-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_32_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:32:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:43:43 +0000   Fri, 19 Jul 2024 18:43:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-269108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd8856bcc7e74f5db8563df427c95529
	  System UUID:                bd8856bc-c7e7-4f5d-b856-3df427c95529
	  Boot ID:                    6d32c45e-6958-4023-8d00-acf926738400
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hds6j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-269108-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-fs5wk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-269108-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-269108-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vr7bz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-269108-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-269108-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-269108-m02 status is now: NodeNotReady
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node ha-269108-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node ha-269108-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-269108-m02 event: Registered Node ha-269108-m02 in Controller
	
	
	Name:               ha-269108-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-269108-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=ha-269108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T18_34_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:34:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-269108-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:44:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:45:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:45:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:45:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 18:44:26 +0000   Fri, 19 Jul 2024 18:45:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-269108-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 db189232948d49d08567fb37977226c0
	  System UUID:                db189232-948d-49d0-8567-fb37977226c0
	  Boot ID:                    1a39f02c-c82c-49de-b33a-f6daf6a96016
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r678j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-2cplc              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-qfjxp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-269108-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   RegisteredNode           4m                     node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   NodeNotReady             3m24s                  node-controller  Node ha-269108-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-269108-m04 event: Registered Node ha-269108-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-269108-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-269108-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-269108-m04 has been rebooted, boot id: 1a39f02c-c82c-49de-b33a-f6daf6a96016
	  Normal   NodeReady                2m48s                  kubelet          Node ha-269108-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-269108-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.238779] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.059116] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053977] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173547] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128447] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.273873] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[Jul19 18:31] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +3.987459] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062161] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240611] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.081692] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.045525] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.095803] kauditd_printk_skb: 36 callbacks suppressed
	[Jul19 18:32] kauditd_printk_skb: 24 callbacks suppressed
	[Jul19 18:42] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.156997] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	[  +0.187872] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.147767] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[  +0.292465] systemd-fstab-generator[3794]: Ignoring "noauto" option for root device
	[  +0.753848] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[  +6.178344] kauditd_printk_skb: 222 callbacks suppressed
	[ +33.928680] kauditd_printk_skb: 5 callbacks suppressed
	[Jul19 18:43] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [65cdebd7fead61555f3eb344ede3b87169b46a3d890899b185210a547160c51d] <==
	{"level":"info","ts":"2024-07-19T18:40:41.858632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 received MsgPreVoteResp from a8a86752a40bcef4 at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 [logterm: 2, index: 2203] sent MsgPreVote request to 17e1ba8abf8c6143 at term 2"}
	{"level":"info","ts":"2024-07-19T18:40:41.858858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 [logterm: 2, index: 2203] sent MsgPreVote request to 74443fbe11e84754 at term 2"}
	{"level":"warn","ts":"2024-07-19T18:40:42.069524Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T18:40:42.069634Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.163:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T18:40:42.071243Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a8a86752a40bcef4","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T18:40:42.071444Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071472Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071501Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071596Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071643Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071689Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.071713Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"17e1ba8abf8c6143"}
	{"level":"info","ts":"2024-07-19T18:40:42.07172Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071729Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071757Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.07182Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071855Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071893Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.071905Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:40:42.074475Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-07-19T18:40:42.074649Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.163:2380"}
	{"level":"info","ts":"2024-07-19T18:40:42.074683Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-269108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.163:2380"],"advertise-client-urls":["https://192.168.39.163:2379"]}
	
	
	==> etcd [8c99e057b54cd9e137c753fbec9eb0ea6306196975ca9c0f5e4f889daf40a59d] <==
	{"level":"info","ts":"2024-07-19T18:43:46.117275Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:43:46.124962Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a8a86752a40bcef4","to":"74443fbe11e84754","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T18:43:46.124998Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:43:47.020401Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T18:43:47.020416Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"74443fbe11e84754","rtt":"0s","error":"dial tcp 192.168.39.111:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T18:43:50.959071Z","caller":"traceutil/trace.go:171","msg":"trace[1885181845] transaction","detail":"{read_only:false; response_revision:2285; number_of_response:1; }","duration":"146.644003ms","start":"2024-07-19T18:43:50.812403Z","end":"2024-07-19T18:43:50.959047Z","steps":["trace[1885181845] 'process raft request'  (duration: 146.53966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T18:44:40.006134Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.111:56950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-19T18:44:40.022095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a8a86752a40bcef4 switched to configuration voters=(1720861637714141507 12153077199096499956)"}
	{"level":"info","ts":"2024-07-19T18:44:40.023812Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e373eafcd5903e51","local-member-id":"a8a86752a40bcef4","removed-remote-peer-id":"74443fbe11e84754","removed-remote-peer-urls":["https://192.168.39.111:2380"]}
	{"level":"info","ts":"2024-07-19T18:44:40.023937Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.02408Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.024114Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.024521Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.02457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.024911Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.025254Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","error":"context canceled"}
	{"level":"warn","ts":"2024-07-19T18:44:40.02533Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"74443fbe11e84754","error":"failed to read 74443fbe11e84754 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-19T18:44:40.025361Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.025653Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754","error":"context canceled"}
	{"level":"info","ts":"2024-07-19T18:44:40.025698Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a8a86752a40bcef4","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.025725Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.025902Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a8a86752a40bcef4","removed-remote-peer-id":"74443fbe11e84754"}
	{"level":"info","ts":"2024-07-19T18:44:40.02637Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"a8a86752a40bcef4","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.047209Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a8a86752a40bcef4","remote-peer-id-stream-handler":"a8a86752a40bcef4","remote-peer-id-from":"74443fbe11e84754"}
	{"level":"warn","ts":"2024-07-19T18:44:40.048629Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a8a86752a40bcef4","remote-peer-id-stream-handler":"a8a86752a40bcef4","remote-peer-id-from":"74443fbe11e84754"}
	
	
	==> kernel <==
	 18:47:14 up 16 min,  0 users,  load average: 0.42, 0.41, 0.28
	Linux ha-269108 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [00a3dee6a93924220b2eface2a95a672b43a05ce683a5fa3e15d4a6e0f88a921] <==
	I0719 18:46:27.396428       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:46:37.403524       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:46:37.403591       1 main.go:299] handling current node
	I0719 18:46:37.403614       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:46:37.403620       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:46:37.403788       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:46:37.403811       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:46:47.402231       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:46:47.402370       1 main.go:299] handling current node
	I0719 18:46:47.402408       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:46:47.402429       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:46:47.402580       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:46:47.402603       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:46:57.395311       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:46:57.395366       1 main.go:299] handling current node
	I0719 18:46:57.395394       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:46:57.395400       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:46:57.395589       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:46:57.395610       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:47:07.403329       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:47:07.403481       1 main.go:299] handling current node
	I0719 18:47:07.403518       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:47:07.403538       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:47:07.403714       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:47:07.403744       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [1b0808457c5c75d11706b6ffd7814fd4e46226d94c503fb77c9aa9c3739be671] <==
	I0719 18:40:12.391414       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:12.391462       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:12.391607       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:12.391627       1 main.go:299] handling current node
	I0719 18:40:12.391638       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:12.391643       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:12.391708       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:12.391725       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:22.387212       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:22.387307       1 main.go:299] handling current node
	I0719 18:40:22.387342       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:22.387348       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:22.387570       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:22.387592       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:22.387641       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:22.387658       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:32.385780       1 main.go:295] Handling node with IPs: map[192.168.39.87:{}]
	I0719 18:40:32.385858       1 main.go:322] Node ha-269108-m02 has CIDR [10.244.1.0/24] 
	I0719 18:40:32.386103       1 main.go:295] Handling node with IPs: map[192.168.39.111:{}]
	I0719 18:40:32.386487       1 main.go:322] Node ha-269108-m03 has CIDR [10.244.2.0/24] 
	I0719 18:40:32.386640       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0719 18:40:32.386683       1 main.go:322] Node ha-269108-m04 has CIDR [10.244.3.0/24] 
	I0719 18:40:32.386779       1 main.go:295] Handling node with IPs: map[192.168.39.163:{}]
	I0719 18:40:32.386800       1 main.go:299] handling current node
	E0719 18:40:40.209465       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [34be3054e3f20da8fc4dd2a1b657f03a54ed610f94ec57efc2a5dfbdf2cde39a] <==
	I0719 18:43:01.010623       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0719 18:43:01.010695       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0719 18:43:01.064743       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 18:43:01.064879       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 18:43:01.065946       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 18:43:01.067232       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 18:43:01.068036       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 18:43:01.068124       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0719 18:43:01.076376       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.87]
	I0719 18:43:01.085215       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 18:43:01.113898       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 18:43:01.113986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 18:43:01.114044       1 aggregator.go:165] initial CRD sync complete...
	I0719 18:43:01.114067       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 18:43:01.114074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 18:43:01.114079       1 cache.go:39] Caches are synced for autoregister controller
	I0719 18:43:01.114326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 18:43:01.114357       1 policy_source.go:224] refreshing policies
	I0719 18:43:01.151893       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 18:43:01.179297       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 18:43:01.187112       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0719 18:43:01.190545       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 18:43:01.973712       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 18:43:02.310998       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.163 192.168.39.87]
	W0719 18:43:12.311121       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.87]
	
	
	==> kube-apiserver [76c0f3100700323b35ca91f87077c5ef59192d9768b24ef7730bbb356439b23b] <==
	I0719 18:42:16.603452       1 options.go:221] external host was not specified, using 192.168.39.163
	I0719 18:42:16.630551       1 server.go:148] Version: v1.30.3
	I0719 18:42:16.630626       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:42:17.726457       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 18:42:17.739236       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 18:42:17.743710       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 18:42:17.745545       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 18:42:17.745822       1 instance.go:299] Using reconciler: lease
	W0719 18:42:37.724257       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0719 18:42:37.724441       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0719 18:42:37.747080       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [81fb866ed689e0dc2eb5f19c1c566cdd0b72053376dc0145a4e981fbd079eb3c] <==
	I0719 18:42:17.414805       1 serving.go:380] Generated self-signed cert in-memory
	I0719 18:42:18.017645       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 18:42:18.017729       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:42:18.021039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 18:42:18.021768       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 18:42:18.022049       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 18:42:18.022112       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0719 18:42:38.752550       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.163:8443/healthz\": dial tcp 192.168.39.163:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a0f5d4e1754ebac6b06f74389d4db5e5220834e10c045e765af7c6ac0c93bba1] <==
	E0719 18:45:34.034625       1 gc_controller.go:153] "Failed to get node" err="node \"ha-269108-m03\" not found" logger="pod-garbage-collector-controller" node="ha-269108-m03"
	I0719 18:45:34.047786       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-269108-m03"
	I0719 18:45:34.077022       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-269108-m03"
	I0719 18:45:34.077254       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-269108-m03"
	I0719 18:45:34.106786       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-269108-m03"
	I0719 18:45:34.106885       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-269108-m03"
	I0719 18:45:34.142431       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-269108-m03"
	I0719 18:45:34.142517       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-269108-m03"
	I0719 18:45:34.167106       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-269108-m03"
	I0719 18:45:34.167241       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bmcvm"
	I0719 18:45:34.197996       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bmcvm"
	I0719 18:45:34.198112       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9zpp2"
	I0719 18:45:34.233689       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9zpp2"
	I0719 18:45:34.233773       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-269108-m03"
	I0719 18:45:34.262033       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-269108-m03"
	I0719 18:45:43.308777       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dzl6d\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 18:45:43.309219       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"174fe22e-8e69-4270-be8d-360e58f87fc9", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dzl6d": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:45:43.311677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.757834ms"
	I0719 18:45:43.312866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="193.31µs"
	I0719 18:45:43.348652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.863489ms"
	I0719 18:45:43.348743       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-dzl6d\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 18:45:43.349103       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"174fe22e-8e69-4270-be8d-360e58f87fc9", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-dzl6d EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-dzl6d": the object has been modified; please apply your changes to the latest version and try again
	I0719 18:45:43.349966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.873µs"
	I0719 18:45:43.514721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.605908ms"
	I0719 18:45:43.514823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.715µs"
	
	
	==> kube-proxy [01c9be2fa966f212fb25f9c4cabcd726723a1406f5951bbf8b0fc3b0ceb26df7] <==
	E0719 18:42:43.475842       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-269108\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 18:43:01.908744       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-269108\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 18:43:01.909067       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0719 18:43:01.944625       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:43:01.944696       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:43:01.944713       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:43:01.947060       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:43:01.947490       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:43:01.947548       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:43:01.949139       1 config.go:192] "Starting service config controller"
	I0719 18:43:01.949257       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:43:01.949314       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:43:01.949331       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:43:01.950013       1 config.go:319] "Starting node config controller"
	I0719 18:43:01.950050       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0719 18:43:04.979704       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0719 18:43:04.979995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:43:04.979941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:43:04.980290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:43:04.980431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0719 18:43:06.249864       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:43:06.449812       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:43:06.550347       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [61f7a66979140fe738b08e527164c2369f41fd064fa704fc9fee868074b4a452] <==
	E0719 18:39:39.157536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.227926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.228233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.228427       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.228525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:42.229334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:42.229567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.374966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.375355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.375739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.375856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:48.375764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:48.376052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:39:57.589448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:39:57.589535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:00.661176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:00.661234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:00.661313       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:00.661345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:16.020039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:16.020140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1865": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:22.164015       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:22.164078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 18:40:25.236249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 18:40:25.236365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-269108&resourceVersion=1795": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [00761e9db17ecfad96376a93ad36118da3b611230214d3f6b730aed8037c9f74] <==
	E0719 18:40:38.987640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 18:40:39.413817       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 18:40:39.414018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 18:40:39.745014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 18:40:39.745059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 18:40:40.493569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 18:40:40.493610       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 18:40:40.668335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:40:40.668389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:40:40.731543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:40:40.731638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:40:40.788134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 18:40:40.788217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 18:40:40.990528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 18:40:40.990582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 18:40:41.023983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 18:40:41.024114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 18:40:41.317914       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:40:41.317944       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 18:40:41.649122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:40:41.649238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0719 18:40:41.764072       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 18:40:41.764543       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0719 18:40:41.764972       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0719 18:40:41.770244       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [02e7b50680c1dac42f2c682f7ea72ed5143e51f1ecff42c1b0cb04290d516627] <==
	W0719 18:42:55.837425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:55.837520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.122829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.163:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.122968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.163:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.252314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.163:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.252469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.163:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.292768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.163:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.292828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.163:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.369807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.163:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.369858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.163:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.437502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.163:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.437555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.163:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:56.632955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:56.633071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:57.506903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:57.507030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.163:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:59.315756       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.163:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:59.315872       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.163:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	W0719 18:42:59.377818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.163:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	E0719 18:42:59.377883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.163:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.163:8443: connect: connection refused
	I0719 18:43:13.559853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 18:44:36.707617       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-r678j\": pod busybox-fc5497c4f-r678j is already assigned to node \"ha-269108-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-r678j" node="ha-269108-m04"
	E0719 18:44:36.708233       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2ef7a124-efad-4f54-ba2a-5d34f30b3ec1(default/busybox-fc5497c4f-r678j) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-r678j"
	E0719 18:44:36.708319       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-r678j\": pod busybox-fc5497c4f-r678j is already assigned to node \"ha-269108-m04\"" pod="default/busybox-fc5497c4f-r678j"
	I0719 18:44:36.708416       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-r678j" node="ha-269108-m04"
	
	
	==> kubelet <==
	Jul 19 18:45:37 ha-269108 kubelet[1371]: E0719 18:45:37.759711    1371 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-269108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001717    1371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: E0719 18:45:41.001833    1371 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-269108?timeout=10s\": http2: client connection lost"
	Jul 19 18:45:41 ha-269108 kubelet[1371]: I0719 18:45:41.002414    1371 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001730    1371 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: E0719 18:45:41.001888    1371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-269108\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-269108?timeout=10s\": http2: client connection lost"
	Jul 19 18:45:41 ha-269108 kubelet[1371]: E0719 18:45:41.006073    1371 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001756    1371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001953    1371 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001971    1371 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.001986    1371 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.002000    1371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.002029    1371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:41 ha-269108 kubelet[1371]: W0719 18:45:41.002052    1371 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 19 18:45:42 ha-269108 kubelet[1371]: I0719 18:45:42.451686    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-269108" podStartSLOduration=126.451651631 podStartE2EDuration="2m6.451651631s" podCreationTimestamp="2024-07-19 18:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 18:43:56.123586931 +0000 UTC m=+763.029600998" watchObservedRunningTime="2024-07-19 18:45:42.451651631 +0000 UTC m=+869.357665697"
	Jul 19 18:46:13 ha-269108 kubelet[1371]: E0719 18:46:13.274122    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:46:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:46:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:46:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:46:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 18:47:13 ha-269108 kubelet[1371]: E0719 18:47:13.274304    1371 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 18:47:13 ha-269108 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 18:47:13 ha-269108 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 18:47:13 ha-269108 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 18:47:13 ha-269108 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 18:47:13.080348   41795 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19307-14841/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-269108 -n ha-269108
helpers_test.go:261: (dbg) Run:  kubectl --context ha-269108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269079
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-269079
E0719 19:02:32.426822   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-269079: exit status 82 (2m1.765170607s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-269079-m03"  ...
	* Stopping node "multinode-269079-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-269079" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269079 --wait=true -v=8 --alsologtostderr
E0719 19:04:43.525550   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269079 --wait=true -v=8 --alsologtostderr: (3m19.901449237s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269079
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-269079 -n multinode-269079
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-269079 logs -n 25: (1.377480149s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079:/home/docker/cp-test_multinode-269079-m02_multinode-269079.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079 sudo cat                                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m02_multinode-269079.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03:/home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079-m03 sudo cat                                   | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp testdata/cp-test.txt                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079:/home/docker/cp-test_multinode-269079-m03_multinode-269079.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079 sudo cat                                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02:/home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079-m02 sudo cat                                   | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-269079 node stop m03                                                          | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	| node    | multinode-269079 node start                                                             | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| stop    | -p multinode-269079                                                                     | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| start   | -p multinode-269079                                                                     | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:04 UTC | 19 Jul 24 19:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:04:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:04:01.037115   51196 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:04:01.037237   51196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:04:01.037247   51196 out.go:304] Setting ErrFile to fd 2...
	I0719 19:04:01.037254   51196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:04:01.037438   51196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:04:01.037988   51196 out.go:298] Setting JSON to false
	I0719 19:04:01.038897   51196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6384,"bootTime":1721409457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:04:01.038948   51196 start.go:139] virtualization: kvm guest
	I0719 19:04:01.041278   51196 out.go:177] * [multinode-269079] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:04:01.042682   51196 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:04:01.042688   51196 notify.go:220] Checking for updates...
	I0719 19:04:01.044038   51196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:04:01.045403   51196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:04:01.046689   51196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:04:01.047784   51196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:04:01.048928   51196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:04:01.050403   51196 config.go:182] Loaded profile config "multinode-269079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:04:01.050498   51196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:04:01.050890   51196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:04:01.050943   51196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:04:01.066545   51196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0719 19:04:01.066928   51196 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:04:01.067589   51196 main.go:141] libmachine: Using API Version  1
	I0719 19:04:01.067614   51196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:04:01.068025   51196 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:04:01.068294   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.103449   51196 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:04:01.104785   51196 start.go:297] selected driver: kvm2
	I0719 19:04:01.104815   51196 start.go:901] validating driver "kvm2" against &{Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:04:01.104994   51196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:04:01.105434   51196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:04:01.105516   51196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:04:01.119801   51196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:04:01.120915   51196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:04:01.120991   51196 cni.go:84] Creating CNI manager for ""
	I0719 19:04:01.121007   51196 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 19:04:01.121081   51196 start.go:340] cluster config:
	{Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:04:01.121229   51196 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:04:01.124102   51196 out.go:177] * Starting "multinode-269079" primary control-plane node in "multinode-269079" cluster
	I0719 19:04:01.125481   51196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:04:01.125511   51196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 19:04:01.125517   51196 cache.go:56] Caching tarball of preloaded images
	I0719 19:04:01.125581   51196 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:04:01.125592   51196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 19:04:01.125698   51196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/config.json ...
	I0719 19:04:01.125872   51196 start.go:360] acquireMachinesLock for multinode-269079: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:04:01.125927   51196 start.go:364] duration metric: took 38.861µs to acquireMachinesLock for "multinode-269079"
	I0719 19:04:01.125939   51196 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:04:01.125946   51196 fix.go:54] fixHost starting: 
	I0719 19:04:01.126181   51196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:04:01.126213   51196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:04:01.140179   51196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0719 19:04:01.140583   51196 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:04:01.141001   51196 main.go:141] libmachine: Using API Version  1
	I0719 19:04:01.141021   51196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:04:01.141352   51196 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:04:01.141552   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.141728   51196 main.go:141] libmachine: (multinode-269079) Calling .GetState
	I0719 19:04:01.143505   51196 fix.go:112] recreateIfNeeded on multinode-269079: state=Running err=<nil>
	W0719 19:04:01.143540   51196 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:04:01.145532   51196 out.go:177] * Updating the running kvm2 "multinode-269079" VM ...
	I0719 19:04:01.146806   51196 machine.go:94] provisionDockerMachine start ...
	I0719 19:04:01.146829   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.147081   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.149519   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.149983   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.150030   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.150198   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.150388   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.150575   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.150727   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.150874   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.151107   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.151120   51196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:04:01.264601   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-269079
	
	I0719 19:04:01.264634   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.264902   51196 buildroot.go:166] provisioning hostname "multinode-269079"
	I0719 19:04:01.264929   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.265116   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.267800   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.268221   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.268249   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.268399   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.268580   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.268754   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.268875   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.269064   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.269264   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.269278   51196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-269079 && echo "multinode-269079" | sudo tee /etc/hostname
	I0719 19:04:01.394010   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-269079
	
	I0719 19:04:01.394049   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.396908   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.397253   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.397275   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.397467   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.397670   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.397831   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.397984   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.398152   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.398351   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.398368   51196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-269079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-269079/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-269079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:04:01.508292   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:04:01.508328   51196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:04:01.508373   51196 buildroot.go:174] setting up certificates
	I0719 19:04:01.508387   51196 provision.go:84] configureAuth start
	I0719 19:04:01.508406   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.508706   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:04:01.511262   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.511692   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.511714   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.511892   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.513933   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.514314   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.514359   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.514437   51196 provision.go:143] copyHostCerts
	I0719 19:04:01.514483   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:04:01.514519   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:04:01.514526   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:04:01.514597   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:04:01.514670   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:04:01.514687   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:04:01.514691   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:04:01.514716   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:04:01.514754   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:04:01.514769   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:04:01.514776   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:04:01.514796   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:04:01.514838   51196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.multinode-269079 san=[127.0.0.1 192.168.39.154 localhost minikube multinode-269079]
	I0719 19:04:01.695598   51196 provision.go:177] copyRemoteCerts
	I0719 19:04:01.695658   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:04:01.695689   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.698215   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.698625   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.698655   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.698825   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.699000   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.699120   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.699268   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:04:01.786134   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 19:04:01.786348   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:04:01.810290   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 19:04:01.810396   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 19:04:01.832627   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 19:04:01.832699   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:04:01.855512   51196 provision.go:87] duration metric: took 347.10822ms to configureAuth
	I0719 19:04:01.855537   51196 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:04:01.855781   51196 config.go:182] Loaded profile config "multinode-269079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:04:01.855895   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.858861   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.859307   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.859333   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.859545   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.859731   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.859903   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.860071   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.860236   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.860406   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.860420   51196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:05:32.568014   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:05:32.568072   51196 machine.go:97] duration metric: took 1m31.421249625s to provisionDockerMachine
	I0719 19:05:32.568099   51196 start.go:293] postStartSetup for "multinode-269079" (driver="kvm2")
	I0719 19:05:32.568127   51196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:05:32.568164   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.568570   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:05:32.568598   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.571573   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.571974   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.571996   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.572271   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.572484   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.572648   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.572810   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.658486   51196 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:05:32.662372   51196 command_runner.go:130] > NAME=Buildroot
	I0719 19:05:32.662389   51196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 19:05:32.662395   51196 command_runner.go:130] > ID=buildroot
	I0719 19:05:32.662403   51196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 19:05:32.662410   51196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 19:05:32.662509   51196 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:05:32.662545   51196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:05:32.662638   51196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:05:32.662718   51196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:05:32.662727   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 19:05:32.662804   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:05:32.672061   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:05:32.693504   51196 start.go:296] duration metric: took 125.387003ms for postStartSetup
	I0719 19:05:32.693578   51196 fix.go:56] duration metric: took 1m31.567616605s for fixHost
	I0719 19:05:32.693605   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.696353   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.696822   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.696853   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.697040   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.697232   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.697385   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.697546   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.697735   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:05:32.697940   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:05:32.697954   51196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:05:32.808126   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415932.778404010
	
	I0719 19:05:32.808154   51196 fix.go:216] guest clock: 1721415932.778404010
	I0719 19:05:32.808165   51196 fix.go:229] Guest: 2024-07-19 19:05:32.77840401 +0000 UTC Remote: 2024-07-19 19:05:32.693589706 +0000 UTC m=+91.690721572 (delta=84.814304ms)
	I0719 19:05:32.808204   51196 fix.go:200] guest clock delta is within tolerance: 84.814304ms
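Note: the delta reported above is simply the guest clock minus the host-side timestamp taken when fixHost finished. A minimal sketch of that arithmetic, with the two values copied from the log lines above (names are illustrative only, not minikube code):

package main

import "fmt"

func main() {
	// timestamps copied from the log above (seconds since the Unix epoch)
	guest := 1721415932.778404010  // clock read inside the VM via `date`
	remote := 1721415932.693589706 // host-side timestamp recorded by fixHost
	// difference in milliseconds; the log reports this as "delta"
	fmt.Printf("delta = %.6fms\n", (guest-remote)*1000) // roughly 84.814304ms, subject to float64 rounding
}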
	I0719 19:05:32.808216   51196 start.go:83] releasing machines lock for "multinode-269079", held for 1m31.682280593s
	I0719 19:05:32.808244   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.808484   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:05:32.810925   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.811281   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.811305   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.811440   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.811894   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.812072   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.812162   51196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:05:32.812215   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.812276   51196 ssh_runner.go:195] Run: cat /version.json
	I0719 19:05:32.812298   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.814733   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.814993   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815049   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.815076   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815194   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.815366   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.815463   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.815499   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.815550   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815790   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.815804   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.815969   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.816111   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.816410   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.922502   51196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 19:05:32.923209   51196 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 19:05:32.923385   51196 ssh_runner.go:195] Run: systemctl --version
	I0719 19:05:32.929373   51196 command_runner.go:130] > systemd 252 (252)
	I0719 19:05:32.929417   51196 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 19:05:32.929476   51196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:05:33.087670   51196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 19:05:33.093106   51196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 19:05:33.093211   51196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:05:33.093276   51196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:05:33.101762   51196 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 19:05:33.101782   51196 start.go:495] detecting cgroup driver to use...
	I0719 19:05:33.101847   51196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:05:33.117412   51196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:05:33.130102   51196 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:05:33.130141   51196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:05:33.142334   51196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:05:33.154451   51196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:05:33.291162   51196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:05:33.435093   51196 docker.go:233] disabling docker service ...
	I0719 19:05:33.435175   51196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:05:33.453492   51196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:05:33.466188   51196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:05:33.605482   51196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:05:33.738300   51196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:05:33.751249   51196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:05:33.769874   51196 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 19:05:33.769948   51196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:05:33.770005   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.779580   51196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:05:33.779648   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.788868   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.799124   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.809610   51196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:05:33.820000   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.830339   51196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.839930   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.849421   51196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:05:33.858055   51196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 19:05:33.858109   51196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:05:33.866614   51196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:05:34.003103   51196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:05:34.228499   51196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:05:34.228572   51196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:05:34.233012   51196 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 19:05:34.233032   51196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 19:05:34.233039   51196 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0719 19:05:34.233049   51196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 19:05:34.233057   51196 command_runner.go:130] > Access: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233068   51196 command_runner.go:130] > Modify: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233079   51196 command_runner.go:130] > Change: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233085   51196 command_runner.go:130] >  Birth: -
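Note: the "Will wait 60s for socket path" step above amounts to polling stat on the socket until it exists or the deadline passes. A minimal, self-contained sketch of that idea (not minikube's actual implementation; the path and the 60s budget come from the log, the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}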
	I0719 19:05:34.233119   51196 start.go:563] Will wait 60s for crictl version
	I0719 19:05:34.233166   51196 ssh_runner.go:195] Run: which crictl
	I0719 19:05:34.236747   51196 command_runner.go:130] > /usr/bin/crictl
	I0719 19:05:34.236807   51196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:05:34.274436   51196 command_runner.go:130] > Version:  0.1.0
	I0719 19:05:34.274462   51196 command_runner.go:130] > RuntimeName:  cri-o
	I0719 19:05:34.274468   51196 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 19:05:34.274475   51196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 19:05:34.274525   51196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:05:34.274615   51196 ssh_runner.go:195] Run: crio --version
	I0719 19:05:34.299381   51196 command_runner.go:130] > crio version 1.29.1
	I0719 19:05:34.299408   51196 command_runner.go:130] > Version:        1.29.1
	I0719 19:05:34.299416   51196 command_runner.go:130] > GitCommit:      unknown
	I0719 19:05:34.299423   51196 command_runner.go:130] > GitCommitDate:  unknown
	I0719 19:05:34.299429   51196 command_runner.go:130] > GitTreeState:   clean
	I0719 19:05:34.299439   51196 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 19:05:34.299445   51196 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 19:05:34.299452   51196 command_runner.go:130] > Compiler:       gc
	I0719 19:05:34.299459   51196 command_runner.go:130] > Platform:       linux/amd64
	I0719 19:05:34.299466   51196 command_runner.go:130] > Linkmode:       dynamic
	I0719 19:05:34.299473   51196 command_runner.go:130] > BuildTags:      
	I0719 19:05:34.299483   51196 command_runner.go:130] >   containers_image_ostree_stub
	I0719 19:05:34.299489   51196 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 19:05:34.299496   51196 command_runner.go:130] >   btrfs_noversion
	I0719 19:05:34.299507   51196 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 19:05:34.299513   51196 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 19:05:34.299524   51196 command_runner.go:130] >   seccomp
	I0719 19:05:34.299530   51196 command_runner.go:130] > LDFlags:          unknown
	I0719 19:05:34.299536   51196 command_runner.go:130] > SeccompEnabled:   true
	I0719 19:05:34.299543   51196 command_runner.go:130] > AppArmorEnabled:  false
	I0719 19:05:34.300683   51196 ssh_runner.go:195] Run: crio --version
	I0719 19:05:34.326060   51196 command_runner.go:130] > crio version 1.29.1
	I0719 19:05:34.326084   51196 command_runner.go:130] > Version:        1.29.1
	I0719 19:05:34.326101   51196 command_runner.go:130] > GitCommit:      unknown
	I0719 19:05:34.326105   51196 command_runner.go:130] > GitCommitDate:  unknown
	I0719 19:05:34.326109   51196 command_runner.go:130] > GitTreeState:   clean
	I0719 19:05:34.326115   51196 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 19:05:34.326119   51196 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 19:05:34.326123   51196 command_runner.go:130] > Compiler:       gc
	I0719 19:05:34.326127   51196 command_runner.go:130] > Platform:       linux/amd64
	I0719 19:05:34.326131   51196 command_runner.go:130] > Linkmode:       dynamic
	I0719 19:05:34.326144   51196 command_runner.go:130] > BuildTags:      
	I0719 19:05:34.326151   51196 command_runner.go:130] >   containers_image_ostree_stub
	I0719 19:05:34.326162   51196 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 19:05:34.326168   51196 command_runner.go:130] >   btrfs_noversion
	I0719 19:05:34.326174   51196 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 19:05:34.326181   51196 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 19:05:34.326186   51196 command_runner.go:130] >   seccomp
	I0719 19:05:34.326196   51196 command_runner.go:130] > LDFlags:          unknown
	I0719 19:05:34.326202   51196 command_runner.go:130] > SeccompEnabled:   true
	I0719 19:05:34.326208   51196 command_runner.go:130] > AppArmorEnabled:  false
	I0719 19:05:34.328653   51196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:05:34.329808   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:05:34.332208   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:34.332587   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:34.332616   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:34.332910   51196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:05:34.336749   51196 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 19:05:34.336990   51196 kubeadm.go:883] updating cluster {Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:05:34.337119   51196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:05:34.337158   51196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:05:34.377182   51196 command_runner.go:130] > {
	I0719 19:05:34.377200   51196 command_runner.go:130] >   "images": [
	I0719 19:05:34.377205   51196 command_runner.go:130] >     {
	I0719 19:05:34.377212   51196 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 19:05:34.377217   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377224   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 19:05:34.377228   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377232   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377240   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 19:05:34.377248   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 19:05:34.377252   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377256   51196 command_runner.go:130] >       "size": "87165492",
	I0719 19:05:34.377260   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377267   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377272   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377278   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377282   51196 command_runner.go:130] >     },
	I0719 19:05:34.377285   51196 command_runner.go:130] >     {
	I0719 19:05:34.377292   51196 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0719 19:05:34.377296   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377302   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0719 19:05:34.377305   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377310   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377316   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0719 19:05:34.377324   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0719 19:05:34.377329   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377336   51196 command_runner.go:130] >       "size": "87174707",
	I0719 19:05:34.377339   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377360   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377364   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377370   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377374   51196 command_runner.go:130] >     },
	I0719 19:05:34.377386   51196 command_runner.go:130] >     {
	I0719 19:05:34.377394   51196 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 19:05:34.377398   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377404   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 19:05:34.377408   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377412   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377421   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 19:05:34.377429   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 19:05:34.377434   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377438   51196 command_runner.go:130] >       "size": "1363676",
	I0719 19:05:34.377443   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377447   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377453   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377457   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377462   51196 command_runner.go:130] >     },
	I0719 19:05:34.377465   51196 command_runner.go:130] >     {
	I0719 19:05:34.377471   51196 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 19:05:34.377477   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377482   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 19:05:34.377488   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377492   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377501   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 19:05:34.377518   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 19:05:34.377524   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377528   51196 command_runner.go:130] >       "size": "31470524",
	I0719 19:05:34.377534   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377538   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377546   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377551   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377558   51196 command_runner.go:130] >     },
	I0719 19:05:34.377564   51196 command_runner.go:130] >     {
	I0719 19:05:34.377570   51196 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 19:05:34.377576   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377581   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 19:05:34.377584   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377588   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377599   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 19:05:34.377608   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 19:05:34.377614   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377618   51196 command_runner.go:130] >       "size": "61245718",
	I0719 19:05:34.377624   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377628   51196 command_runner.go:130] >       "username": "nonroot",
	I0719 19:05:34.377633   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377638   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377643   51196 command_runner.go:130] >     },
	I0719 19:05:34.377646   51196 command_runner.go:130] >     {
	I0719 19:05:34.377654   51196 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 19:05:34.377659   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377663   51196 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 19:05:34.377669   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377673   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377682   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 19:05:34.377691   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 19:05:34.377696   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377700   51196 command_runner.go:130] >       "size": "150779692",
	I0719 19:05:34.377706   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377710   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377716   51196 command_runner.go:130] >       },
	I0719 19:05:34.377720   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377726   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377730   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377736   51196 command_runner.go:130] >     },
	I0719 19:05:34.377739   51196 command_runner.go:130] >     {
	I0719 19:05:34.377745   51196 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 19:05:34.377751   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377760   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 19:05:34.377766   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377770   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377779   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 19:05:34.377788   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 19:05:34.377793   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377797   51196 command_runner.go:130] >       "size": "117609954",
	I0719 19:05:34.377803   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377806   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377812   51196 command_runner.go:130] >       },
	I0719 19:05:34.377816   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377822   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377826   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377832   51196 command_runner.go:130] >     },
	I0719 19:05:34.377836   51196 command_runner.go:130] >     {
	I0719 19:05:34.377842   51196 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 19:05:34.377846   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377853   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 19:05:34.377859   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377863   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377883   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 19:05:34.377892   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 19:05:34.377895   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377899   51196 command_runner.go:130] >       "size": "112198984",
	I0719 19:05:34.377902   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377906   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377910   51196 command_runner.go:130] >       },
	I0719 19:05:34.377914   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377917   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377921   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377924   51196 command_runner.go:130] >     },
	I0719 19:05:34.377927   51196 command_runner.go:130] >     {
	I0719 19:05:34.377933   51196 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 19:05:34.377936   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377941   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 19:05:34.377944   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377952   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377959   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 19:05:34.377965   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 19:05:34.377968   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377975   51196 command_runner.go:130] >       "size": "85953945",
	I0719 19:05:34.377978   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377981   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377985   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377988   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377991   51196 command_runner.go:130] >     },
	I0719 19:05:34.377994   51196 command_runner.go:130] >     {
	I0719 19:05:34.378000   51196 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 19:05:34.378006   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.378011   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 19:05:34.378016   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378020   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.378027   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 19:05:34.378036   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 19:05:34.378041   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378045   51196 command_runner.go:130] >       "size": "63051080",
	I0719 19:05:34.378051   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.378054   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.378060   51196 command_runner.go:130] >       },
	I0719 19:05:34.378064   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.378069   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.378073   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.378079   51196 command_runner.go:130] >     },
	I0719 19:05:34.378082   51196 command_runner.go:130] >     {
	I0719 19:05:34.378091   51196 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 19:05:34.378096   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.378101   51196 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 19:05:34.378107   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378111   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.378120   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 19:05:34.378128   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 19:05:34.378134   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378138   51196 command_runner.go:130] >       "size": "750414",
	I0719 19:05:34.378144   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.378148   51196 command_runner.go:130] >         "value": "65535"
	I0719 19:05:34.378153   51196 command_runner.go:130] >       },
	I0719 19:05:34.378157   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.378162   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.378166   51196 command_runner.go:130] >       "pinned": true
	I0719 19:05:34.378171   51196 command_runner.go:130] >     }
	I0719 19:05:34.378175   51196 command_runner.go:130] >   ]
	I0719 19:05:34.378180   51196 command_runner.go:130] > }
	I0719 19:05:34.378786   51196 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:05:34.378801   51196 crio.go:433] Images already preloaded, skipping extraction
	I0719 19:05:34.378848   51196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:05:34.408674   51196 command_runner.go:130] > {
	I0719 19:05:34.408692   51196 command_runner.go:130] >   "images": [
	I0719 19:05:34.408696   51196 command_runner.go:130] >     {
	I0719 19:05:34.408704   51196 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 19:05:34.408708   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408714   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 19:05:34.408718   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408722   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408730   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 19:05:34.408737   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 19:05:34.408741   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408745   51196 command_runner.go:130] >       "size": "87165492",
	I0719 19:05:34.408749   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408753   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408761   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408771   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408777   51196 command_runner.go:130] >     },
	I0719 19:05:34.408780   51196 command_runner.go:130] >     {
	I0719 19:05:34.408786   51196 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0719 19:05:34.408792   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408797   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0719 19:05:34.408801   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408805   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408814   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0719 19:05:34.408821   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0719 19:05:34.408827   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408831   51196 command_runner.go:130] >       "size": "87174707",
	I0719 19:05:34.408835   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408843   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408849   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408853   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408856   51196 command_runner.go:130] >     },
	I0719 19:05:34.408860   51196 command_runner.go:130] >     {
	I0719 19:05:34.408866   51196 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 19:05:34.408872   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408877   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 19:05:34.408884   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408889   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408898   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 19:05:34.408907   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 19:05:34.408913   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408917   51196 command_runner.go:130] >       "size": "1363676",
	I0719 19:05:34.408923   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408927   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408936   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408943   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408947   51196 command_runner.go:130] >     },
	I0719 19:05:34.408950   51196 command_runner.go:130] >     {
	I0719 19:05:34.408956   51196 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 19:05:34.408962   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408967   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 19:05:34.408978   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408986   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408994   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 19:05:34.409009   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 19:05:34.409015   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409019   51196 command_runner.go:130] >       "size": "31470524",
	I0719 19:05:34.409025   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409029   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409035   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409038   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409044   51196 command_runner.go:130] >     },
	I0719 19:05:34.409047   51196 command_runner.go:130] >     {
	I0719 19:05:34.409055   51196 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 19:05:34.409060   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409065   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 19:05:34.409073   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409077   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409086   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 19:05:34.409096   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 19:05:34.409101   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409105   51196 command_runner.go:130] >       "size": "61245718",
	I0719 19:05:34.409111   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409116   51196 command_runner.go:130] >       "username": "nonroot",
	I0719 19:05:34.409121   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409126   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409131   51196 command_runner.go:130] >     },
	I0719 19:05:34.409135   51196 command_runner.go:130] >     {
	I0719 19:05:34.409144   51196 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 19:05:34.409150   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409155   51196 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 19:05:34.409161   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409166   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409174   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 19:05:34.409181   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 19:05:34.409186   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409190   51196 command_runner.go:130] >       "size": "150779692",
	I0719 19:05:34.409200   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409206   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409212   51196 command_runner.go:130] >       },
	I0719 19:05:34.409218   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409222   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409228   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409232   51196 command_runner.go:130] >     },
	I0719 19:05:34.409237   51196 command_runner.go:130] >     {
	I0719 19:05:34.409243   51196 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 19:05:34.409249   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409254   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 19:05:34.409259   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409263   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409272   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 19:05:34.409290   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 19:05:34.409296   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409300   51196 command_runner.go:130] >       "size": "117609954",
	I0719 19:05:34.409306   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409310   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409315   51196 command_runner.go:130] >       },
	I0719 19:05:34.409319   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409325   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409329   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409334   51196 command_runner.go:130] >     },
	I0719 19:05:34.409338   51196 command_runner.go:130] >     {
	I0719 19:05:34.409354   51196 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 19:05:34.409365   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409370   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 19:05:34.409374   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409377   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409425   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 19:05:34.409437   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 19:05:34.409441   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409445   51196 command_runner.go:130] >       "size": "112198984",
	I0719 19:05:34.409451   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409454   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409465   51196 command_runner.go:130] >       },
	I0719 19:05:34.409471   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409475   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409481   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409484   51196 command_runner.go:130] >     },
	I0719 19:05:34.409490   51196 command_runner.go:130] >     {
	I0719 19:05:34.409496   51196 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 19:05:34.409502   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409507   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 19:05:34.409513   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409517   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409526   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 19:05:34.409537   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 19:05:34.409543   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409547   51196 command_runner.go:130] >       "size": "85953945",
	I0719 19:05:34.409553   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409558   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409564   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409568   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409574   51196 command_runner.go:130] >     },
	I0719 19:05:34.409577   51196 command_runner.go:130] >     {
	I0719 19:05:34.409585   51196 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 19:05:34.409589   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409596   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 19:05:34.409601   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409605   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409614   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 19:05:34.409623   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 19:05:34.409629   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409633   51196 command_runner.go:130] >       "size": "63051080",
	I0719 19:05:34.409639   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409642   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409648   51196 command_runner.go:130] >       },
	I0719 19:05:34.409652   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409658   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409662   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409673   51196 command_runner.go:130] >     },
	I0719 19:05:34.409679   51196 command_runner.go:130] >     {
	I0719 19:05:34.409684   51196 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 19:05:34.409690   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409695   51196 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 19:05:34.409701   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409705   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409713   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 19:05:34.409722   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 19:05:34.409728   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409732   51196 command_runner.go:130] >       "size": "750414",
	I0719 19:05:34.409735   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409739   51196 command_runner.go:130] >         "value": "65535"
	I0719 19:05:34.409745   51196 command_runner.go:130] >       },
	I0719 19:05:34.409748   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409755   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409759   51196 command_runner.go:130] >       "pinned": true
	I0719 19:05:34.409764   51196 command_runner.go:130] >     }
	I0719 19:05:34.409767   51196 command_runner.go:130] >   ]
	I0719 19:05:34.409773   51196 command_runner.go:130] > }
	I0719 19:05:34.410143   51196 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:05:34.410165   51196 cache_images.go:84] Images are preloaded, skipping loading
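	The two "crictl images --output json" listings above are what drive the "all images are preloaded for cri-o runtime" and "Images are preloaded, skipping loading" conclusions. The Go sketch below is a hypothetical illustration of that kind of check, not minikube's actual crio.go/cache_images.go code; the required-image list and the use of os/exec here are assumptions.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the JSON shape shown in the log: {"images": [{"repoTags": [...]}, ...]}.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Hypothetical required set; minikube derives the real list from its preload manifest.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/pause:3.9",
		}

		// Same command the log shows being run on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}

		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		for _, want := range required {
			if !have[want] {
				fmt.Println("missing image, preload extraction needed:", want)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime")
	}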
	I0719 19:05:34.410174   51196 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0719 19:05:34.410299   51196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-269079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
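	The kubelet [Unit]/[Service] fragment and the cluster config printed above are what kubeadm.go renders into the node's kubelet systemd drop-in. As a hedged illustration only, a small text/template sketch in the same spirit (the template text and field names are assumptions, not minikube's real template; the substituted values are taken from the log above) could look like:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical inputs; minikube fills the real ones from the node config logged above.
	type kubeletOpts struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	// A systemd drop-in fragment shaped like the one printed by kubeadm.go:946.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		if err := t.Execute(os.Stdout, kubeletOpts{
			KubernetesVersion: "v1.30.3",
			NodeName:          "multinode-269079",
			NodeIP:            "192.168.39.154",
		}); err != nil {
			panic(err)
		}
	}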
	I0719 19:05:34.410372   51196 ssh_runner.go:195] Run: crio config
	I0719 19:05:34.449879   51196 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 19:05:34.449904   51196 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 19:05:34.449916   51196 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 19:05:34.449921   51196 command_runner.go:130] > #
	I0719 19:05:34.449931   51196 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 19:05:34.449940   51196 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 19:05:34.449949   51196 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 19:05:34.449984   51196 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 19:05:34.449995   51196 command_runner.go:130] > # reload'.
	I0719 19:05:34.450005   51196 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 19:05:34.450017   51196 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 19:05:34.450031   51196 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 19:05:34.450044   51196 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 19:05:34.450051   51196 command_runner.go:130] > [crio]
	I0719 19:05:34.450062   51196 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 19:05:34.450075   51196 command_runner.go:130] > # containers images, in this directory.
	I0719 19:05:34.450094   51196 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 19:05:34.450117   51196 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 19:05:34.450129   51196 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 19:05:34.450141   51196 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 19:05:34.450151   51196 command_runner.go:130] > # imagestore = ""
	I0719 19:05:34.450164   51196 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 19:05:34.450178   51196 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 19:05:34.450216   51196 command_runner.go:130] > storage_driver = "overlay"
	I0719 19:05:34.450233   51196 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 19:05:34.450244   51196 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 19:05:34.450254   51196 command_runner.go:130] > storage_option = [
	I0719 19:05:34.450263   51196 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 19:05:34.450273   51196 command_runner.go:130] > ]
	I0719 19:05:34.450285   51196 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 19:05:34.450304   51196 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 19:05:34.450315   51196 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 19:05:34.450327   51196 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 19:05:34.450340   51196 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 19:05:34.450348   51196 command_runner.go:130] > # always happen on a node reboot
	I0719 19:05:34.450360   51196 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 19:05:34.450384   51196 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 19:05:34.450398   51196 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 19:05:34.450409   51196 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 19:05:34.450417   51196 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 19:05:34.450433   51196 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 19:05:34.450449   51196 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 19:05:34.450459   51196 command_runner.go:130] > # internal_wipe = true
	I0719 19:05:34.450473   51196 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 19:05:34.450486   51196 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 19:05:34.450495   51196 command_runner.go:130] > # internal_repair = false
	I0719 19:05:34.450508   51196 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 19:05:34.450520   51196 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 19:05:34.450533   51196 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 19:05:34.450542   51196 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 19:05:34.450564   51196 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 19:05:34.450574   51196 command_runner.go:130] > [crio.api]
	I0719 19:05:34.450592   51196 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 19:05:34.450606   51196 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 19:05:34.450617   51196 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 19:05:34.450627   51196 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 19:05:34.450640   51196 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 19:05:34.450652   51196 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 19:05:34.450663   51196 command_runner.go:130] > # stream_port = "0"
	I0719 19:05:34.450673   51196 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 19:05:34.450684   51196 command_runner.go:130] > # stream_enable_tls = false
	I0719 19:05:34.450695   51196 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 19:05:34.450705   51196 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 19:05:34.450716   51196 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 19:05:34.450729   51196 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 19:05:34.450739   51196 command_runner.go:130] > # minutes.
	I0719 19:05:34.450746   51196 command_runner.go:130] > # stream_tls_cert = ""
	I0719 19:05:34.450761   51196 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 19:05:34.450775   51196 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 19:05:34.450783   51196 command_runner.go:130] > # stream_tls_key = ""
	I0719 19:05:34.450796   51196 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 19:05:34.450810   51196 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 19:05:34.450841   51196 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 19:05:34.450852   51196 command_runner.go:130] > # stream_tls_ca = ""
	I0719 19:05:34.450865   51196 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 19:05:34.450875   51196 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 19:05:34.450890   51196 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 19:05:34.450901   51196 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 19:05:34.450914   51196 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 19:05:34.450927   51196 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 19:05:34.450937   51196 command_runner.go:130] > [crio.runtime]
	I0719 19:05:34.450948   51196 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 19:05:34.450961   51196 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 19:05:34.450975   51196 command_runner.go:130] > # "nofile=1024:2048"
	I0719 19:05:34.450989   51196 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 19:05:34.450999   51196 command_runner.go:130] > # default_ulimits = [
	I0719 19:05:34.451006   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451018   51196 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 19:05:34.451034   51196 command_runner.go:130] > # no_pivot = false
	I0719 19:05:34.451045   51196 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 19:05:34.451059   51196 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 19:05:34.451071   51196 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 19:05:34.451085   51196 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 19:05:34.451097   51196 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 19:05:34.451109   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 19:05:34.451120   51196 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 19:05:34.451128   51196 command_runner.go:130] > # Cgroup setting for conmon
	I0719 19:05:34.451142   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 19:05:34.451153   51196 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 19:05:34.451166   51196 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 19:05:34.451178   51196 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 19:05:34.451191   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 19:05:34.451201   51196 command_runner.go:130] > conmon_env = [
	I0719 19:05:34.451215   51196 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 19:05:34.451227   51196 command_runner.go:130] > ]
	I0719 19:05:34.451240   51196 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 19:05:34.451252   51196 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 19:05:34.451264   51196 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 19:05:34.451274   51196 command_runner.go:130] > # default_env = [
	I0719 19:05:34.451281   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451291   51196 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 19:05:34.451305   51196 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0719 19:05:34.451311   51196 command_runner.go:130] > # selinux = false
	I0719 19:05:34.451321   51196 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 19:05:34.451331   51196 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 19:05:34.451341   51196 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 19:05:34.451353   51196 command_runner.go:130] > # seccomp_profile = ""
	I0719 19:05:34.451365   51196 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 19:05:34.451377   51196 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 19:05:34.451388   51196 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 19:05:34.451399   51196 command_runner.go:130] > # which might increase security.
	I0719 19:05:34.451410   51196 command_runner.go:130] > # This option is currently deprecated,
	I0719 19:05:34.451420   51196 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 19:05:34.451431   51196 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 19:05:34.451448   51196 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 19:05:34.451463   51196 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 19:05:34.451477   51196 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 19:05:34.451491   51196 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 19:05:34.451502   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.451516   51196 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 19:05:34.451530   51196 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 19:05:34.451540   51196 command_runner.go:130] > # the cgroup blockio controller.
	I0719 19:05:34.451549   51196 command_runner.go:130] > # blockio_config_file = ""
	I0719 19:05:34.451566   51196 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 19:05:34.451576   51196 command_runner.go:130] > # blockio parameters.
	I0719 19:05:34.451586   51196 command_runner.go:130] > # blockio_reload = false
	I0719 19:05:34.451601   51196 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 19:05:34.451611   51196 command_runner.go:130] > # irqbalance daemon.
	I0719 19:05:34.451623   51196 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 19:05:34.451636   51196 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 19:05:34.451650   51196 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 19:05:34.451664   51196 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 19:05:34.451691   51196 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 19:05:34.451708   51196 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 19:05:34.451719   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.451730   51196 command_runner.go:130] > # rdt_config_file = ""
	I0719 19:05:34.451741   51196 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 19:05:34.451754   51196 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 19:05:34.451775   51196 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 19:05:34.451783   51196 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 19:05:34.451793   51196 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 19:05:34.451807   51196 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 19:05:34.451817   51196 command_runner.go:130] > # will be added.
	I0719 19:05:34.451825   51196 command_runner.go:130] > # default_capabilities = [
	I0719 19:05:34.451853   51196 command_runner.go:130] > # 	"CHOWN",
	I0719 19:05:34.451865   51196 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 19:05:34.451871   51196 command_runner.go:130] > # 	"FSETID",
	I0719 19:05:34.451878   51196 command_runner.go:130] > # 	"FOWNER",
	I0719 19:05:34.451887   51196 command_runner.go:130] > # 	"SETGID",
	I0719 19:05:34.451894   51196 command_runner.go:130] > # 	"SETUID",
	I0719 19:05:34.451905   51196 command_runner.go:130] > # 	"SETPCAP",
	I0719 19:05:34.451917   51196 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 19:05:34.451923   51196 command_runner.go:130] > # 	"KILL",
	I0719 19:05:34.451933   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451943   51196 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 19:05:34.451957   51196 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 19:05:34.451969   51196 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 19:05:34.451980   51196 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 19:05:34.451995   51196 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 19:05:34.452005   51196 command_runner.go:130] > default_sysctls = [
	I0719 19:05:34.452019   51196 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 19:05:34.452028   51196 command_runner.go:130] > ]
	I0719 19:05:34.452036   51196 command_runner.go:130] > # List of devices on the host that a
	I0719 19:05:34.452047   51196 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 19:05:34.452059   51196 command_runner.go:130] > # allowed_devices = [
	I0719 19:05:34.452070   51196 command_runner.go:130] > # 	"/dev/fuse",
	I0719 19:05:34.452076   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452089   51196 command_runner.go:130] > # List of additional devices. specified as
	I0719 19:05:34.452104   51196 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 19:05:34.452118   51196 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 19:05:34.452131   51196 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 19:05:34.452140   51196 command_runner.go:130] > # additional_devices = [
	I0719 19:05:34.452144   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452156   51196 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 19:05:34.452171   51196 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 19:05:34.452180   51196 command_runner.go:130] > # 	"/etc/cdi",
	I0719 19:05:34.452191   51196 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 19:05:34.452201   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452212   51196 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 19:05:34.452227   51196 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 19:05:34.452237   51196 command_runner.go:130] > # Defaults to false.
	I0719 19:05:34.452243   51196 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 19:05:34.452257   51196 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 19:05:34.452270   51196 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 19:05:34.452282   51196 command_runner.go:130] > # hooks_dir = [
	I0719 19:05:34.452293   51196 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 19:05:34.452304   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452319   51196 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 19:05:34.452333   51196 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 19:05:34.452343   51196 command_runner.go:130] > # its default mounts from the following two files:
	I0719 19:05:34.452348   51196 command_runner.go:130] > #
	I0719 19:05:34.452362   51196 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 19:05:34.452376   51196 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 19:05:34.452389   51196 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 19:05:34.452398   51196 command_runner.go:130] > #
	I0719 19:05:34.452410   51196 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 19:05:34.452424   51196 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 19:05:34.452436   51196 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 19:05:34.452448   51196 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 19:05:34.452459   51196 command_runner.go:130] > #
	I0719 19:05:34.452470   51196 command_runner.go:130] > # default_mounts_file = ""
	I0719 19:05:34.452479   51196 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 19:05:34.452493   51196 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 19:05:34.452503   51196 command_runner.go:130] > pids_limit = 1024
	I0719 19:05:34.452512   51196 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0719 19:05:34.452523   51196 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 19:05:34.452536   51196 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 19:05:34.452561   51196 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 19:05:34.452572   51196 command_runner.go:130] > # log_size_max = -1
	I0719 19:05:34.452587   51196 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 19:05:34.452598   51196 command_runner.go:130] > # log_to_journald = false
	I0719 19:05:34.452605   51196 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 19:05:34.452616   51196 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 19:05:34.452629   51196 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 19:05:34.452643   51196 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 19:05:34.452657   51196 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 19:05:34.452668   51196 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 19:05:34.452682   51196 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 19:05:34.452692   51196 command_runner.go:130] > # read_only = false
	I0719 19:05:34.452700   51196 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 19:05:34.452709   51196 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 19:05:34.452717   51196 command_runner.go:130] > # live configuration reload.
	I0719 19:05:34.452726   51196 command_runner.go:130] > # log_level = "info"
	I0719 19:05:34.452736   51196 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 19:05:34.452744   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.452750   51196 command_runner.go:130] > # log_filter = ""
	I0719 19:05:34.452759   51196 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 19:05:34.452769   51196 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 19:05:34.452775   51196 command_runner.go:130] > # separated by comma.
	I0719 19:05:34.452785   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452792   51196 command_runner.go:130] > # uid_mappings = ""
	I0719 19:05:34.452802   51196 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 19:05:34.452816   51196 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 19:05:34.452825   51196 command_runner.go:130] > # separated by comma.
	I0719 19:05:34.452840   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452851   51196 command_runner.go:130] > # gid_mappings = ""
	I0719 19:05:34.452864   51196 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 19:05:34.452877   51196 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 19:05:34.452891   51196 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 19:05:34.452904   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452915   51196 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 19:05:34.452929   51196 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 19:05:34.452943   51196 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 19:05:34.452956   51196 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 19:05:34.452968   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452977   51196 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 19:05:34.452991   51196 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 19:05:34.453001   51196 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 19:05:34.453015   51196 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 19:05:34.453029   51196 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 19:05:34.453045   51196 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 19:05:34.453058   51196 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 19:05:34.453068   51196 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 19:05:34.453077   51196 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 19:05:34.453088   51196 command_runner.go:130] > drop_infra_ctr = false
	I0719 19:05:34.453099   51196 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 19:05:34.453112   51196 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 19:05:34.453127   51196 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 19:05:34.453140   51196 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 19:05:34.453155   51196 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 19:05:34.453166   51196 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 19:05:34.453179   51196 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 19:05:34.453192   51196 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 19:05:34.453200   51196 command_runner.go:130] > # shared_cpuset = ""
	I0719 19:05:34.453214   51196 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 19:05:34.453227   51196 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 19:05:34.453238   51196 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 19:05:34.453249   51196 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 19:05:34.453260   51196 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 19:05:34.453270   51196 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 19:05:34.453284   51196 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 19:05:34.453296   51196 command_runner.go:130] > # enable_criu_support = false
	I0719 19:05:34.453304   51196 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 19:05:34.453318   51196 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 19:05:34.453330   51196 command_runner.go:130] > # enable_pod_events = false
	I0719 19:05:34.453344   51196 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 19:05:34.453359   51196 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 19:05:34.453371   51196 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 19:05:34.453380   51196 command_runner.go:130] > # default_runtime = "runc"
	I0719 19:05:34.453388   51196 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 19:05:34.453405   51196 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 19:05:34.453422   51196 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 19:05:34.453435   51196 command_runner.go:130] > # creation as a file is not desired either.
	I0719 19:05:34.453452   51196 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 19:05:34.453468   51196 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 19:05:34.453480   51196 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 19:05:34.453489   51196 command_runner.go:130] > # ]
	I0719 19:05:34.453500   51196 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 19:05:34.453515   51196 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 19:05:34.453529   51196 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 19:05:34.453541   51196 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 19:05:34.453550   51196 command_runner.go:130] > #
	I0719 19:05:34.453562   51196 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 19:05:34.453572   51196 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 19:05:34.453596   51196 command_runner.go:130] > # runtime_type = "oci"
	I0719 19:05:34.453608   51196 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 19:05:34.453617   51196 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 19:05:34.453628   51196 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 19:05:34.453640   51196 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 19:05:34.453651   51196 command_runner.go:130] > # monitor_env = []
	I0719 19:05:34.453663   51196 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 19:05:34.453673   51196 command_runner.go:130] > # allowed_annotations = []
	I0719 19:05:34.453683   51196 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 19:05:34.453689   51196 command_runner.go:130] > # Where:
	I0719 19:05:34.453701   51196 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 19:05:34.453715   51196 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 19:05:34.453730   51196 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 19:05:34.453744   51196 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 19:05:34.453755   51196 command_runner.go:130] > #   in $PATH.
	I0719 19:05:34.453766   51196 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 19:05:34.453778   51196 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 19:05:34.453788   51196 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 19:05:34.453799   51196 command_runner.go:130] > #   state.
	I0719 19:05:34.453814   51196 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 19:05:34.453828   51196 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0719 19:05:34.453842   51196 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 19:05:34.453855   51196 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 19:05:34.453869   51196 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 19:05:34.453882   51196 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 19:05:34.453892   51196 command_runner.go:130] > #   The currently recognized values are:
	I0719 19:05:34.453907   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 19:05:34.453923   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 19:05:34.453940   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 19:05:34.453955   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 19:05:34.453968   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 19:05:34.453981   51196 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 19:05:34.453996   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 19:05:34.454011   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 19:05:34.454025   51196 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 19:05:34.454039   51196 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 19:05:34.454052   51196 command_runner.go:130] > #   deprecated option "conmon".
	I0719 19:05:34.454068   51196 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 19:05:34.454080   51196 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 19:05:34.454094   51196 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 19:05:34.454107   51196 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 19:05:34.454122   51196 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0719 19:05:34.454134   51196 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 19:05:34.454145   51196 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 19:05:34.454157   51196 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 19:05:34.454167   51196 command_runner.go:130] > #
	I0719 19:05:34.454175   51196 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 19:05:34.454185   51196 command_runner.go:130] > #
	I0719 19:05:34.454196   51196 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 19:05:34.454210   51196 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 19:05:34.454219   51196 command_runner.go:130] > #
	I0719 19:05:34.454230   51196 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 19:05:34.454243   51196 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 19:05:34.454251   51196 command_runner.go:130] > #
	I0719 19:05:34.454265   51196 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 19:05:34.454276   51196 command_runner.go:130] > # feature.
	I0719 19:05:34.454282   51196 command_runner.go:130] > #
	I0719 19:05:34.454295   51196 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 19:05:34.454310   51196 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 19:05:34.454323   51196 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 19:05:34.454335   51196 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 19:05:34.454349   51196 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 19:05:34.454359   51196 command_runner.go:130] > #
	I0719 19:05:34.454370   51196 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 19:05:34.454386   51196 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 19:05:34.454395   51196 command_runner.go:130] > #
	I0719 19:05:34.454406   51196 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 19:05:34.454420   51196 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 19:05:34.454430   51196 command_runner.go:130] > #
	I0719 19:05:34.454440   51196 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 19:05:34.454454   51196 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 19:05:34.454465   51196 command_runner.go:130] > # limitation.
	I0719 19:05:34.454473   51196 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 19:05:34.454485   51196 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 19:05:34.454496   51196 command_runner.go:130] > runtime_type = "oci"
	I0719 19:05:34.454507   51196 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 19:05:34.454531   51196 command_runner.go:130] > runtime_config_path = ""
	I0719 19:05:34.454537   51196 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 19:05:34.454548   51196 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 19:05:34.454561   51196 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 19:05:34.454572   51196 command_runner.go:130] > monitor_env = [
	I0719 19:05:34.454582   51196 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 19:05:34.454591   51196 command_runner.go:130] > ]
	I0719 19:05:34.454604   51196 command_runner.go:130] > privileged_without_host_devices = false
	I0719 19:05:34.454619   51196 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 19:05:34.454631   51196 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 19:05:34.454644   51196 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 19:05:34.454657   51196 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 19:05:34.454674   51196 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 19:05:34.454688   51196 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 19:05:34.454706   51196 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 19:05:34.454723   51196 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 19:05:34.454733   51196 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 19:05:34.454748   51196 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 19:05:34.454759   51196 command_runner.go:130] > # Example:
	I0719 19:05:34.454768   51196 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 19:05:34.454776   51196 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 19:05:34.454784   51196 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 19:05:34.454793   51196 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 19:05:34.454800   51196 command_runner.go:130] > # cpuset = 0
	I0719 19:05:34.454807   51196 command_runner.go:130] > # cpushares = "0-1"
	I0719 19:05:34.454813   51196 command_runner.go:130] > # Where:
	I0719 19:05:34.454828   51196 command_runner.go:130] > # The workload name is workload-type.
	I0719 19:05:34.454836   51196 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 19:05:34.454844   51196 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 19:05:34.454853   51196 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 19:05:34.454867   51196 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 19:05:34.454876   51196 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0719 19:05:34.454887   51196 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 19:05:34.454899   51196 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 19:05:34.454907   51196 command_runner.go:130] > # Default value is set to true
	I0719 19:05:34.454915   51196 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 19:05:34.454922   51196 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 19:05:34.454927   51196 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 19:05:34.454935   51196 command_runner.go:130] > # Default value is set to 'false'
	I0719 19:05:34.454957   51196 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 19:05:34.454972   51196 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 19:05:34.454982   51196 command_runner.go:130] > #
	I0719 19:05:34.455068   51196 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 19:05:34.455088   51196 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 19:05:34.455103   51196 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 19:05:34.455116   51196 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 19:05:34.455128   51196 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 19:05:34.455136   51196 command_runner.go:130] > [crio.image]
	I0719 19:05:34.455146   51196 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 19:05:34.455156   51196 command_runner.go:130] > # default_transport = "docker://"
	I0719 19:05:34.455165   51196 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 19:05:34.455192   51196 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 19:05:34.455203   51196 command_runner.go:130] > # global_auth_file = ""
	I0719 19:05:34.455214   51196 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 19:05:34.455225   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.455235   51196 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 19:05:34.455249   51196 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 19:05:34.455262   51196 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 19:05:34.455272   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.455279   51196 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 19:05:34.455287   51196 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 19:05:34.455300   51196 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 19:05:34.455319   51196 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 19:05:34.455337   51196 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 19:05:34.455347   51196 command_runner.go:130] > # pause_command = "/pause"
	I0719 19:05:34.455358   51196 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 19:05:34.455367   51196 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 19:05:34.455376   51196 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 19:05:34.455392   51196 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 19:05:34.455405   51196 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 19:05:34.455418   51196 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 19:05:34.455427   51196 command_runner.go:130] > # pinned_images = [
	I0719 19:05:34.455435   51196 command_runner.go:130] > # ]
	I0719 19:05:34.455447   51196 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 19:05:34.455456   51196 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 19:05:34.455468   51196 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 19:05:34.455481   51196 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 19:05:34.455492   51196 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 19:05:34.455502   51196 command_runner.go:130] > # signature_policy = ""
	I0719 19:05:34.455514   51196 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 19:05:34.455528   51196 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 19:05:34.455541   51196 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 19:05:34.455549   51196 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 19:05:34.455558   51196 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 19:05:34.455568   51196 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 19:05:34.455581   51196 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 19:05:34.455593   51196 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 19:05:34.455602   51196 command_runner.go:130] > # changing them here.
	I0719 19:05:34.455612   51196 command_runner.go:130] > # insecure_registries = [
	I0719 19:05:34.455618   51196 command_runner.go:130] > # ]
	I0719 19:05:34.455629   51196 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 19:05:34.455636   51196 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 19:05:34.455642   51196 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 19:05:34.455653   51196 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 19:05:34.455663   51196 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 19:05:34.455676   51196 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 19:05:34.455685   51196 command_runner.go:130] > # CNI plugins.
	I0719 19:05:34.455694   51196 command_runner.go:130] > [crio.network]
	I0719 19:05:34.455706   51196 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 19:05:34.455718   51196 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 19:05:34.455725   51196 command_runner.go:130] > # cni_default_network = ""
	I0719 19:05:34.455735   51196 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 19:05:34.455745   51196 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 19:05:34.455754   51196 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 19:05:34.455765   51196 command_runner.go:130] > # plugin_dirs = [
	I0719 19:05:34.455773   51196 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 19:05:34.455781   51196 command_runner.go:130] > # ]
	I0719 19:05:34.455791   51196 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 19:05:34.455799   51196 command_runner.go:130] > [crio.metrics]
	I0719 19:05:34.455808   51196 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 19:05:34.455815   51196 command_runner.go:130] > enable_metrics = true
	I0719 19:05:34.455820   51196 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 19:05:34.455829   51196 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 19:05:34.455852   51196 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0719 19:05:34.455863   51196 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 19:05:34.455875   51196 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 19:05:34.455885   51196 command_runner.go:130] > # metrics_collectors = [
	I0719 19:05:34.455893   51196 command_runner.go:130] > # 	"operations",
	I0719 19:05:34.455904   51196 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 19:05:34.455913   51196 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 19:05:34.455921   51196 command_runner.go:130] > # 	"operations_errors",
	I0719 19:05:34.455925   51196 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 19:05:34.455933   51196 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 19:05:34.455943   51196 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 19:05:34.455954   51196 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 19:05:34.455964   51196 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 19:05:34.455974   51196 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 19:05:34.455983   51196 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 19:05:34.455993   51196 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 19:05:34.456002   51196 command_runner.go:130] > # 	"containers_oom_total",
	I0719 19:05:34.456010   51196 command_runner.go:130] > # 	"containers_oom",
	I0719 19:05:34.456019   51196 command_runner.go:130] > # 	"processes_defunct",
	I0719 19:05:34.456029   51196 command_runner.go:130] > # 	"operations_total",
	I0719 19:05:34.456039   51196 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 19:05:34.456050   51196 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 19:05:34.456060   51196 command_runner.go:130] > # 	"operations_errors_total",
	I0719 19:05:34.456069   51196 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 19:05:34.456079   51196 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 19:05:34.456088   51196 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 19:05:34.456095   51196 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 19:05:34.456103   51196 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 19:05:34.456112   51196 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 19:05:34.456124   51196 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 19:05:34.456133   51196 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 19:05:34.456145   51196 command_runner.go:130] > # ]
	I0719 19:05:34.456156   51196 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 19:05:34.456165   51196 command_runner.go:130] > # metrics_port = 9090
	I0719 19:05:34.456174   51196 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 19:05:34.456180   51196 command_runner.go:130] > # metrics_socket = ""
	I0719 19:05:34.456187   51196 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 19:05:34.456200   51196 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 19:05:34.456213   51196 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 19:05:34.456223   51196 command_runner.go:130] > # certificate on any modification event.
	I0719 19:05:34.456233   51196 command_runner.go:130] > # metrics_cert = ""
	I0719 19:05:34.456244   51196 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 19:05:34.456254   51196 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 19:05:34.456264   51196 command_runner.go:130] > # metrics_key = ""
	I0719 19:05:34.456275   51196 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 19:05:34.456281   51196 command_runner.go:130] > [crio.tracing]
	I0719 19:05:34.456288   51196 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 19:05:34.456297   51196 command_runner.go:130] > # enable_tracing = false
	I0719 19:05:34.456309   51196 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0719 19:05:34.456319   51196 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 19:05:34.456350   51196 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 19:05:34.456360   51196 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 19:05:34.456369   51196 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 19:05:34.456376   51196 command_runner.go:130] > [crio.nri]
	I0719 19:05:34.456387   51196 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 19:05:34.456396   51196 command_runner.go:130] > # enable_nri = false
	I0719 19:05:34.456406   51196 command_runner.go:130] > # NRI socket to listen on.
	I0719 19:05:34.456414   51196 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 19:05:34.456424   51196 command_runner.go:130] > # NRI plugin directory to use.
	I0719 19:05:34.456434   51196 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 19:05:34.456445   51196 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 19:05:34.456456   51196 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 19:05:34.456467   51196 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 19:05:34.456475   51196 command_runner.go:130] > # nri_disable_connections = false
	I0719 19:05:34.456483   51196 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 19:05:34.456493   51196 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 19:05:34.456504   51196 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 19:05:34.456515   51196 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 19:05:34.456527   51196 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 19:05:34.456536   51196 command_runner.go:130] > [crio.stats]
	I0719 19:05:34.456549   51196 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 19:05:34.456561   51196 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 19:05:34.456569   51196 command_runner.go:130] > # stats_collection_period = 0
	I0719 19:05:34.456611   51196 command_runner.go:130] ! time="2024-07-19 19:05:34.411390708Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 19:05:34.456637   51196 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
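The [crio.metrics] section in the dump above leaves metrics enabled (enable_metrics = true) on the default port 9090. A minimal Go sketch of scraping that endpoint and printing only the CRI-O counters follows; the host address, port, and metric prefixes are assumptions taken from the defaults shown in the config, not something this test run executes.

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Assumes enable_metrics = true and the default metrics_port = 9090
	// from the [crio.metrics] section above; adjust the host as needed.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Keep only CRI-O counters, e.g. crio_operations_total or
		// container_runtime_crio_operations (see metrics_collectors above).
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_crio_") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Println("read error:", err)
	}
}

Run on the node itself (or through an SSH tunnel), this should list the counters named in the metrics_collectors comment block above.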
	I0719 19:05:34.456784   51196 cni.go:84] Creating CNI manager for ""
	I0719 19:05:34.456796   51196 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 19:05:34.456808   51196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:05:34.456842   51196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-269079 NodeName:multinode-269079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:05:34.456994   51196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-269079"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:05:34.457071   51196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:05:34.466414   51196 command_runner.go:130] > kubeadm
	I0719 19:05:34.466433   51196 command_runner.go:130] > kubectl
	I0719 19:05:34.466437   51196 command_runner.go:130] > kubelet
	I0719 19:05:34.466459   51196 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:05:34.466513   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:05:34.474978   51196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0719 19:05:34.490124   51196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:05:34.504775   51196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
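The rendered kubeadm/kubelet/kube-proxy config shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new. As an illustration of its multi-document structure, here is a small Go sketch (using gopkg.in/yaml.v3, which is an assumption, not part of minikube) that reads such a file and reports the pod CIDR from the ClusterConfiguration document:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step in this log; purely illustrative.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			fmt.Println("decode:", err)
			return
		}
		// Only the ClusterConfiguration document carries networking.podSubnet.
		if doc["kind"] == "ClusterConfiguration" {
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("podSubnet:", net["podSubnet"])
			}
		}
	}
}

Against the file generated here it should print podSubnet: 10.244.0.0/16, matching the pod CIDR chosen by kubeadm.go above.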
	I0719 19:05:34.519530   51196 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I0719 19:05:34.522947   51196 command_runner.go:130] > 192.168.39.154	control-plane.minikube.internal
	I0719 19:05:34.523126   51196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:05:34.656921   51196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:05:34.670672   51196 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079 for IP: 192.168.39.154
	I0719 19:05:34.670699   51196 certs.go:194] generating shared ca certs ...
	I0719 19:05:34.670722   51196 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:05:34.670887   51196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:05:34.670940   51196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:05:34.670952   51196 certs.go:256] generating profile certs ...
	I0719 19:05:34.671028   51196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/client.key
	I0719 19:05:34.671083   51196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key.7e64e0f9
	I0719 19:05:34.671123   51196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key
	I0719 19:05:34.671135   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 19:05:34.671148   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 19:05:34.671164   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 19:05:34.671180   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 19:05:34.671195   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 19:05:34.671213   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 19:05:34.671228   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 19:05:34.671246   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
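For context on the "generating profile certs" step that is skipped above because valid certs already exist: a self-contained Go sketch of how a CA-signed API-server certificate with the SANs used in this profile (127.0.0.1, 192.168.39.154, control-plane.minikube.internal) can be minted with crypto/x509. The names, lifetimes, and key sizes are illustrative; this is not minikube's internal certs.go logic.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Errors are elided for brevity in this sketch.
	// Self-signed CA, standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server serving cert with the SANs seen in this log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "control-plane.minikube.internal"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.154")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("apiserver.crt (%d bytes of PEM)\n", len(pemBytes))
}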
	I0719 19:05:34.671317   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:05:34.671348   51196 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:05:34.671360   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:05:34.671386   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:05:34.671411   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:05:34.671434   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:05:34.671477   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:05:34.671506   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.671521   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.671535   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.672118   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:05:34.695433   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:05:34.717165   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:05:34.738855   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:05:34.760014   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:05:34.780762   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:05:34.802183   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:05:34.824134   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:05:34.845819   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:05:34.867630   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:05:34.889587   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:05:34.910521   51196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:05:34.925856   51196 ssh_runner.go:195] Run: openssl version
	I0719 19:05:34.931364   51196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 19:05:34.931422   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:05:34.941168   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945216   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945240   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945277   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.950366   51196 command_runner.go:130] > 3ec20f2e
	I0719 19:05:34.950547   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:05:34.958942   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:05:34.968449   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972339   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972477   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972527   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.977532   51196 command_runner.go:130] > b5213941
	I0719 19:05:34.977575   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:05:34.986187   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:05:34.995695   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999579   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999598   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999628   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:05:35.004815   51196 command_runner.go:130] > 51391683
	I0719 19:05:35.004878   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:05:35.013185   51196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:05:35.017242   51196 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:05:35.017266   51196 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 19:05:35.017275   51196 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0719 19:05:35.017286   51196 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 19:05:35.017295   51196 command_runner.go:130] > Access: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017306   51196 command_runner.go:130] > Modify: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017315   51196 command_runner.go:130] > Change: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017325   51196 command_runner.go:130] >  Birth: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017376   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:05:35.022462   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.022506   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:05:35.037015   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.037481   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:05:35.050872   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.051242   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:05:35.056644   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.056705   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:05:35.062077   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.062214   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:05:35.067856   51196 command_runner.go:130] > Certificate will not expire
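Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours and prints "Certificate will not expire" when it does not. The same check expressed in Go with crypto/x509 (the path is one of the files probed above; the helper name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given duration — the crypto/x509 equivalent of
// `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the files probed above; 86400 s = 24 h.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}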
	I0719 19:05:35.067944   51196 kubeadm.go:392] StartCluster: {Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:05:35.068080   51196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:05:35.068133   51196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:05:35.103443   51196 command_runner.go:130] > a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6
	I0719 19:05:35.103469   51196 command_runner.go:130] > 49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955
	I0719 19:05:35.103476   51196 command_runner.go:130] > e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66
	I0719 19:05:35.103485   51196 command_runner.go:130] > e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e
	I0719 19:05:35.103494   51196 command_runner.go:130] > 68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48
	I0719 19:05:35.103501   51196 command_runner.go:130] > 64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43
	I0719 19:05:35.103510   51196 command_runner.go:130] > 321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0
	I0719 19:05:35.103522   51196 command_runner.go:130] > d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831
	I0719 19:05:35.103549   51196 cri.go:89] found id: "a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6"
	I0719 19:05:35.103560   51196 cri.go:89] found id: "49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955"
	I0719 19:05:35.103565   51196 cri.go:89] found id: "e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66"
	I0719 19:05:35.103572   51196 cri.go:89] found id: "e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e"
	I0719 19:05:35.103576   51196 cri.go:89] found id: "68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48"
	I0719 19:05:35.103580   51196 cri.go:89] found id: "64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43"
	I0719 19:05:35.103584   51196 cri.go:89] found id: "321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0"
	I0719 19:05:35.103591   51196 cri.go:89] found id: "d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831"
	I0719 19:05:35.103595   51196 cri.go:89] found id: ""
	I0719 19:05:35.103647   51196 ssh_runner.go:195] Run: sudo runc list -f json
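The kube-system container IDs listed above come from running crictl over SSH with a namespace label filter. A stripped-down Go sketch of the same invocation executed locally (the crictl flags are exactly those in the log line; the exec wrapper is an illustration, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the log line above: list all kube-system containers,
	// IDs only. Requires crictl and a running CRI-O socket on this host.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}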
	
	
	==> CRI-O <==
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.532917599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416041532895770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab7d028d-77bd-4b6a-8993-4a9cea34a44d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.533451669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ee6c5fe-1259-4d4a-8359-3105124100fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.533530595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ee6c5fe-1259-4d4a-8359-3105124100fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.533876920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ee6c5fe-1259-4d4a-8359-3105124100fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.573498463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=615a50ee-70ef-4c4e-b732-3ded0ae7e5dd name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.573585551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=615a50ee-70ef-4c4e-b732-3ded0ae7e5dd name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.575066742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=731e2bf1-2091-4b74-b0c0-08437f212648 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.575550222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416041575526723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=731e2bf1-2091-4b74-b0c0-08437f212648 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.576562722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01177208-5166-4270-9744-45c08287b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.576622301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01177208-5166-4270-9744-45c08287b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.576970376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01177208-5166-4270-9744-45c08287b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.616904461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9113a737-ea5f-4c6e-acee-ac4b671ca939 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.616993697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9113a737-ea5f-4c6e-acee-ac4b671ca939 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.618103229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79019a41-74cc-41c8-84d0-42b2116992d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.618770794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416041618744166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79019a41-74cc-41c8-84d0-42b2116992d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.619254621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14bb55c1-6fdf-4f2c-8977-55a5f454ca09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.619318685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14bb55c1-6fdf-4f2c-8977-55a5f454ca09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.619650762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14bb55c1-6fdf-4f2c-8977-55a5f454ca09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.660119044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e5ba060-44fe-4dc5-9629-f7ce01933a57 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.660244167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e5ba060-44fe-4dc5-9629-f7ce01933a57 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.661310771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dee29313-7fd2-4e8b-abb9-a42f9d3f5267 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.661726768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416041661706346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dee29313-7fd2-4e8b-abb9-a42f9d3f5267 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.662332780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13bc232e-a6f8-4b74-baa8-de15fcca521b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.662401392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13bc232e-a6f8-4b74-baa8-de15fcca521b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:07:21 multinode-269079 crio[2873]: time="2024-07-19 19:07:21.662763510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13bc232e-a6f8-4b74-baa8-de15fcca521b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	894bd871a6c3f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9d8f06d1f03f5       busybox-fc5497c4f-6fcln
	46fc9907bacdb       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   c571d80547cde       kindnet-k2tw6
	e533ceba794b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   1601505f0629c       coredns-7db6d8ff4d-xccsr
	2a9c6c04f5ffc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   62d16e85274e1       storage-provisioner
	d5f5e7486a1a2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   8bf0f78fac48b       kube-proxy-sthg6
	06cf7dbdff630       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   6db352d6aa071       kube-scheduler-multinode-269079
	1d157f5058913       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   36e84396da070       etcd-multinode-269079
	94bbb6dcaa9e8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   02664b942fba6       kube-apiserver-multinode-269079
	3fbead3e67529       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   b76dd8c5ae47e       kube-controller-manager-multinode-269079
	d14d23b67d466       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   d1c000b9a2b62       busybox-fc5497c4f-6fcln
	a8482fd072f2f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   ffd82047fa6c0       coredns-7db6d8ff4d-xccsr
	49a43bbb0873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   74e7c4e8e9b24       storage-provisioner
	e75ab08afcd7a       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   1047c18ae3d8f       kindnet-k2tw6
	e71f310f55a55       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   f2e597e3254d2       kube-proxy-sthg6
	68f992a866a01       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   340f9bce574ea       etcd-multinode-269079
	64edcccff1ef0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   a13fee91701e9       kube-scheduler-multinode-269079
	321b6c480a371       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   7a6e0f8671b91       kube-controller-manager-multinode-269079
	d45496ab592ce       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   8d8e7cba76b92       kube-apiserver-multinode-269079
	
	
	==> coredns [a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6] <==
	[INFO] 10.244.1.2:56033 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001743583s
	[INFO] 10.244.1.2:35255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099513s
	[INFO] 10.244.1.2:52053 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081038s
	[INFO] 10.244.1.2:55400 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00101875s
	[INFO] 10.244.1.2:51083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069199s
	[INFO] 10.244.1.2:54959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086704s
	[INFO] 10.244.1.2:45609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075s
	[INFO] 10.244.0.3:41084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106625s
	[INFO] 10.244.0.3:34425 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059884s
	[INFO] 10.244.0.3:46778 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000040101s
	[INFO] 10.244.0.3:59356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043828s
	[INFO] 10.244.1.2:40199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013749s
	[INFO] 10.244.1.2:44693 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097438s
	[INFO] 10.244.1.2:52337 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087198s
	[INFO] 10.244.1.2:55280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075693s
	[INFO] 10.244.0.3:47207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072508s
	[INFO] 10.244.0.3:57558 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118504s
	[INFO] 10.244.0.3:33453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006221s
	[INFO] 10.244.0.3:47856 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069179s
	[INFO] 10.244.1.2:54516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129515s
	[INFO] 10.244.1.2:37969 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111945s
	[INFO] 10.244.1.2:48556 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108189s
	[INFO] 10.244.1.2:50848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103317s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33230 - 52201 "HINFO IN 806961655920265643.1201622078205259199. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009922152s
	
	
	==> describe nodes <==
	Name:               multinode-269079
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-269079
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-269079
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_58_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:58:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-269079
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:59:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    multinode-269079
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 315647a683dc44f58933c98e307c48ab
	  System UUID:                315647a6-83dc-44f5-8933-c98e307c48ab
	  Boot ID:                    17ab67e5-8d67-437b-bbc2-3cd5e64f7683
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6fcln                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 coredns-7db6d8ff4d-xccsr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m15s
	  kube-system                 etcd-multinode-269079                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-k2tw6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m15s
	  kube-system                 kube-apiserver-multinode-269079             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-269079    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-sthg6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-multinode-269079             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  Starting                 100s                   kube-proxy       
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x7 over 8m35s)  kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m29s                  kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                  node-controller  Node multinode-269079 event: Registered Node multinode-269079 in Controller
	  Normal  NodeReady                8m                     kubelet          Node multinode-269079 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)    kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)    kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)    kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                    node-controller  Node multinode-269079 event: Registered Node multinode-269079 in Controller
	
	
	Name:               multinode-269079-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-269079-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-269079
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T19_06_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:06:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-269079-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:06:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:06:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:06:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:06:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-269079-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1adbe5e8ade44e8a3c64a0ac3f667b1
	  System UUID:                d1adbe5e-8ade-44e8-a3c6-4a0ac3f667b1
	  Boot ID:                    1e2288ad-9b1f-49f0-b14b-584f458a8d2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np9b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-fvjg5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-proxy-drtlz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m26s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m32s (x2 over 7m32s)  kubelet     Node multinode-269079-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x2 over 7m32s)  kubelet     Node multinode-269079-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x2 over 7m32s)  kubelet     Node multinode-269079-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet     Node multinode-269079-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-269079-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-269079-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-269079-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-269079-m02 status is now: NodeReady
	
	
	Name:               multinode-269079-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-269079-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-269079
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T19_07_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:06:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-269079-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:07:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:07:18 +0000   Fri, 19 Jul 2024 19:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:07:18 +0000   Fri, 19 Jul 2024 19:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:07:18 +0000   Fri, 19 Jul 2024 19:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:07:18 +0000   Fri, 19 Jul 2024 19:07:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-269079-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e09a645a7974af5883c70a1384fe120
	  System UUID:                6e09a645-a797-4af5-883c-70a1384fe120
	  Boot ID:                    c6d4e3b7-0674-490b-bd70-b91ae50d76ce
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zqdz5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-dd6vj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet          Node multinode-269079-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-269079-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet          Node multinode-269079-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-269079-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-269079-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-269079-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-269079-m03 event: Registered Node multinode-269079-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-269079-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055017] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057090] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.189970] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.124166] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.250379] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.959978] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.864064] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.058652] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.476582] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.072761] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.813464] kauditd_printk_skb: 18 callbacks suppressed
	[Jul19 18:59] systemd-fstab-generator[1451]: Ignoring "noauto" option for root device
	[  +5.144801] kauditd_printk_skb: 57 callbacks suppressed
	[Jul19 19:00] kauditd_printk_skb: 14 callbacks suppressed
	[Jul19 19:05] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.136932] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.174082] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +0.141141] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.259670] systemd-fstab-generator[2858]: Ignoring "noauto" option for root device
	[  +0.655461] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +2.067107] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +4.654071] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.754769] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.220194] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[Jul19 19:06] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13] <==
	{"level":"info","ts":"2024-07-19T19:05:38.056109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 switched to configuration voters=(1223707007001805620)"}
	{"level":"info","ts":"2024-07-19T19:05:38.056226Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","added-peer-id":"10fb7b0a157fc334","added-peer-peer-urls":["https://192.168.39.154:2380"]}
	{"level":"info","ts":"2024-07-19T19:05:38.056387Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:05:38.05643Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:05:38.080718Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:05:38.080924Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"10fb7b0a157fc334","initial-advertise-peer-urls":["https://192.168.39.154:2380"],"listen-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:05:38.080969Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:05:38.081128Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:05:38.081186Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:05:39.492408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.498031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:05:39.498056Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:multinode-269079 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:05:39.499114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:05:39.499778Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:05:39.499838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:05:39.500603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:05:39.501478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-07-19T19:06:25.189556Z","caller":"traceutil/trace.go:171","msg":"trace[483146368] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"112.914404ms","start":"2024-07-19T19:06:25.076616Z","end":"2024-07-19T19:06:25.18953Z","steps":["trace[483146368] 'process raft request'  (duration: 112.79748ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:07:04.136311Z","caller":"traceutil/trace.go:171","msg":"trace[200758484] transaction","detail":"{read_only:false; response_revision:1183; number_of_response:1; }","duration":"108.045242ms","start":"2024-07-19T19:07:04.028241Z","end":"2024-07-19T19:07:04.136286Z","steps":["trace[200758484] 'process raft request'  (duration: 107.183081ms)"],"step_count":1}
	
	
	==> etcd [68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48] <==
	{"level":"info","ts":"2024-07-19T18:58:48.513633Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:58:48.513772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:58:48.516199Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T18:58:48.51623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T18:58:48.51753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-07-19T18:58:48.522888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.523025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.523057Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.52333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T18:59:50.059342Z","caller":"traceutil/trace.go:171","msg":"trace[1305887973] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"111.265968ms","start":"2024-07-19T18:59:49.948042Z","end":"2024-07-19T18:59:50.059308Z","steps":["trace[1305887973] 'process raft request'  (duration: 111.140281ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:59:50.197837Z","caller":"traceutil/trace.go:171","msg":"trace[563691110] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"193.382139ms","start":"2024-07-19T18:59:50.004438Z","end":"2024-07-19T18:59:50.19782Z","steps":["trace[563691110] 'process raft request'  (duration: 187.99245ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:59:50.198004Z","caller":"traceutil/trace.go:171","msg":"trace[1315561461] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"134.732511ms","start":"2024-07-19T18:59:50.063256Z","end":"2024-07-19T18:59:50.197989Z","steps":["trace[1315561461] 'process raft request'  (duration: 134.383464ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:00:00.689527Z","caller":"traceutil/trace.go:171","msg":"trace[130620928] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"101.793106ms","start":"2024-07-19T19:00:00.587719Z","end":"2024-07-19T19:00:00.689512Z","steps":["trace[130620928] 'process raft request'  (duration: 100.775946ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:00:47.504524Z","caller":"traceutil/trace.go:171","msg":"trace[1374524115] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"242.988867ms","start":"2024-07-19T19:00:47.261516Z","end":"2024-07-19T19:00:47.504505Z","steps":["trace[1374524115] 'process raft request'  (duration: 227.469159ms)","trace[1374524115] 'compare'  (duration: 15.408424ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:00:47.505223Z","caller":"traceutil/trace.go:171","msg":"trace[1588051002] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"177.068172ms","start":"2024-07-19T19:00:47.328142Z","end":"2024-07-19T19:00:47.50521Z","steps":["trace[1588051002] 'process raft request'  (duration: 176.754897ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:04:01.980085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T19:04:01.983819Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-269079","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	{"level":"warn","ts":"2024-07-19T19:04:01.984009Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984048Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984246Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T19:04:02.045999Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"10fb7b0a157fc334"}
	{"level":"info","ts":"2024-07-19T19:04:02.048435Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:04:02.048691Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:04:02.048713Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-269079","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	
	
	==> kernel <==
	 19:07:22 up 9 min,  0 users,  load average: 0.03, 0.08, 0.07
	Linux multinode-269079 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad] <==
	I0719 19:06:32.458040       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:06:42.457825       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:06:42.457868       1 main.go:299] handling current node
	I0719 19:06:42.457883       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:06:42.457889       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:06:42.458081       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:06:42.458100       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:06:52.457951       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:06:52.458039       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:06:52.458289       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:06:52.458310       1 main.go:299] handling current node
	I0719 19:06:52.458330       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:06:52.458335       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:07:02.458304       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:07:02.458348       1 main.go:299] handling current node
	I0719 19:07:02.458365       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:07:02.458371       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:07:02.458548       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:07:02.458568       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.2.0/24] 
	I0719 19:07:12.458277       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:07:12.458329       1 main.go:299] handling current node
	I0719 19:07:12.458344       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:07:12.458349       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:07:12.458549       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:07:12.458610       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66] <==
	I0719 19:03:21.457615       1 main.go:299] handling current node
	I0719 19:03:31.462343       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:31.462397       1 main.go:299] handling current node
	I0719 19:03:31.462424       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:31.462429       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:31.462580       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:31.462599       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:41.464804       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:41.464991       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:41.465244       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:41.465280       1 main.go:299] handling current node
	I0719 19:03:41.465325       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:41.465343       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:51.465074       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:51.465345       1 main.go:299] handling current node
	I0719 19:03:51.465392       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:51.465413       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:51.465615       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:51.465650       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:04:01.465307       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:04:01.465359       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:04:01.465488       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:04:01.465509       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:04:01.465563       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:04:01.465591       1 main.go:299] handling current node
	
	
	==> kube-apiserver [94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de] <==
	I0719 19:05:40.758921       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 19:05:40.763562       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 19:05:40.767239       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 19:05:40.767310       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 19:05:40.770370       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 19:05:40.791692       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 19:05:40.796020       1 aggregator.go:165] initial CRD sync complete...
	I0719 19:05:40.796065       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 19:05:40.796072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 19:05:40.796081       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:05:40.829061       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 19:05:40.829106       1 policy_source.go:224] refreshing policies
	E0719 19:05:40.856750       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 19:05:40.860776       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:05:40.862139       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:05:40.877887       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 19:05:40.882196       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:05:41.674885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 19:05:42.468105       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 19:05:42.578968       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 19:05:42.590762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 19:05:42.659601       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:05:42.667101       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 19:05:53.912126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 19:05:54.006197       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831] <==
	I0719 18:58:50.981001       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 18:58:50.985141       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 18:58:50.985208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 18:58:51.548518       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 18:58:51.593923       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 18:58:51.697114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 18:58:51.702896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154]
	I0719 18:58:51.703812       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 18:58:51.707803       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 18:58:52.042789       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 18:58:52.934999       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 18:58:52.949403       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 18:58:52.963991       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 18:59:05.800239       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 18:59:05.999793       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0719 19:00:16.565434       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40278: use of closed network connection
	E0719 19:00:16.722840       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40296: use of closed network connection
	E0719 19:00:16.884231       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40312: use of closed network connection
	E0719 19:00:17.213354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40334: use of closed network connection
	E0719 19:00:17.370117       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40362: use of closed network connection
	E0719 19:00:17.629997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40382: use of closed network connection
	E0719 19:00:17.789347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40402: use of closed network connection
	E0719 19:00:17.943338       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40426: use of closed network connection
	E0719 19:00:18.104143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40430: use of closed network connection
	I0719 19:04:01.979843       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0] <==
	I0719 18:59:50.201045       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m02\" does not exist"
	I0719 18:59:50.238090       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m02" podCIDRs=["10.244.1.0/24"]
	I0719 18:59:50.314427       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-269079-m02"
	I0719 19:00:10.055349       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:00:12.523099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.594502ms"
	I0719 19:00:12.544196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.436318ms"
	I0719 19:00:12.544336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.417µs"
	I0719 19:00:12.555079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.276µs"
	I0719 19:00:16.046444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.475799ms"
	I0719 19:00:16.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.372µs"
	I0719 19:00:16.168414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.536343ms"
	I0719 19:00:16.169211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.657µs"
	I0719 19:00:47.507630       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:00:47.507862       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:00:47.525525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.2.0/24"]
	I0719 19:00:50.332332       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-269079-m03"
	I0719 19:01:07.876636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:35.748244       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:37.184399       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:01:37.186804       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:37.206679       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.3.0/24"]
	I0719 19:01:56.599555       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m03"
	I0719 19:02:35.388315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m03"
	I0719 19:02:35.424658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.08126ms"
	I0719 19:02:35.424733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.198µs"
	
	
	==> kube-controller-manager [3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62] <==
	I0719 19:05:54.657333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 19:05:54.657369       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 19:06:16.565632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.376243ms"
	I0719 19:06:16.566143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="175.936µs"
	I0719 19:06:16.581216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.518661ms"
	I0719 19:06:16.592768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.428951ms"
	I0719 19:06:16.592958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.053µs"
	I0719 19:06:20.912960       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m02\" does not exist"
	I0719 19:06:20.922778       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m02" podCIDRs=["10.244.1.0/24"]
	I0719 19:06:22.809484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.308µs"
	I0719 19:06:22.882288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.322µs"
	I0719 19:06:22.902260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.352µs"
	I0719 19:06:22.902430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.903µs"
	I0719 19:06:22.907298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.703µs"
	I0719 19:06:23.608966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.177µs"
	I0719 19:06:40.707732       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:40.728842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.131µs"
	I0719 19:06:40.742765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.255µs"
	I0719 19:06:44.936226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.848612ms"
	I0719 19:06:44.938335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.857µs"
	I0719 19:06:58.534043       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:59.773005       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:06:59.774450       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:59.785898       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.2.0/24"]
	I0719 19:07:18.923133       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	
	
	==> kube-proxy [d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0] <==
	I0719 19:05:41.586554       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:05:41.610244       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0719 19:05:41.650291       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:05:41.650347       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:05:41.650363       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:05:41.653843       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:05:41.654083       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:05:41.654248       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:05:41.658136       1 config.go:192] "Starting service config controller"
	I0719 19:05:41.658258       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:05:41.658342       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:05:41.658427       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:05:41.659677       1 config.go:319] "Starting node config controller"
	I0719 19:05:41.659714       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:05:41.761254       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:05:41.761315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:05:41.761241       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e] <==
	I0719 18:59:07.453006       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:59:07.464502       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0719 18:59:07.497198       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:59:07.497224       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:59:07.497238       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:59:07.499411       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:59:07.499655       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:59:07.499666       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:59:07.500969       1 config.go:192] "Starting service config controller"
	I0719 18:59:07.501050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:59:07.501098       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:59:07.501115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:59:07.501633       1 config.go:319] "Starting node config controller"
	I0719 18:59:07.502578       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:59:07.601237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:59:07.601247       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:59:07.602951       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525] <==
	I0719 19:05:38.833048       1 serving.go:380] Generated self-signed cert in-memory
	W0719 19:05:40.707667       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 19:05:40.708297       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:05:40.708358       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 19:05:40.708383       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 19:05:40.770960       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 19:05:40.773476       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:05:40.786784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:05:40.787909       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:05:40.791845       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:05:40.787935       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:05:40.892630       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43] <==
	W0719 18:58:50.081528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:58:50.081550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:58:50.081594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:58:50.081617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:58:50.081662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 18:58:50.081684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 18:58:50.081729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:58:50.081750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:58:50.937018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:58:50.937188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:58:51.034745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:58:51.034875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:58:51.094815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:58:51.094899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:58:51.122460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:58:51.122600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:58:51.267735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 18:58:51.267847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 18:58:51.389609       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:58:51.389727       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 18:58:53.563649       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:04:01.982130       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 19:04:01.984029       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:04:01.984404       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0719 19:04:01.985477       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 19:05:37 multinode-269079 kubelet[3088]: E0719 19:05:37.520693    3088 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.154:8443: connect: connection refused" node="multinode-269079"
	Jul 19 19:05:38 multinode-269079 kubelet[3088]: I0719 19:05:38.322399    3088 kubelet_node_status.go:73] "Attempting to register node" node="multinode-269079"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.799392    3088 apiserver.go:52] "Watching apiserver"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.813122    3088 topology_manager.go:215] "Topology Admit Handler" podUID="fe26d277-d2d1-4f90-b959-f5c5cab1e3b4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xccsr"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.815427    3088 topology_manager.go:215] "Topology Admit Handler" podUID="26619429-a87d-4b20-859c-ccefabcd5d0c" podNamespace="kube-system" podName="kube-proxy-sthg6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.815748    3088 topology_manager.go:215] "Topology Admit Handler" podUID="c03ec807-2f25-4220-aade-72b3b186cbff" podNamespace="kube-system" podName="kindnet-k2tw6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.815963    3088 topology_manager.go:215] "Topology Admit Handler" podUID="0bcd5d04-5e58-4c45-b920-3e5c914cb20a" podNamespace="kube-system" podName="storage-provisioner"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.816103    3088 topology_manager.go:215] "Topology Admit Handler" podUID="f6cd030c-4433-4384-afb6-224e9298f440" podNamespace="default" podName="busybox-fc5497c4f-6fcln"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.827308    3088 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.827383    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26619429-a87d-4b20-859c-ccefabcd5d0c-xtables-lock\") pod \"kube-proxy-sthg6\" (UID: \"26619429-a87d-4b20-859c-ccefabcd5d0c\") " pod="kube-system/kube-proxy-sthg6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.827418    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0bcd5d04-5e58-4c45-b920-3e5c914cb20a-tmp\") pod \"storage-provisioner\" (UID: \"0bcd5d04-5e58-4c45-b920-3e5c914cb20a\") " pod="kube-system/storage-provisioner"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.827433    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26619429-a87d-4b20-859c-ccefabcd5d0c-lib-modules\") pod \"kube-proxy-sthg6\" (UID: \"26619429-a87d-4b20-859c-ccefabcd5d0c\") " pod="kube-system/kube-proxy-sthg6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.917120    3088 kubelet_node_status.go:112] "Node was previously registered" node="multinode-269079"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.917309    3088 kubelet_node_status.go:76] "Successfully registered node" node="multinode-269079"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.919004    3088 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.920218    3088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928231    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-cni-cfg\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928315    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-xtables-lock\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928435    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-lib-modules\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:46 multinode-269079 kubelet[3088]: I0719 19:05:46.074953    3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 19:06:36 multinode-269079 kubelet[3088]: E0719 19:06:36.855569    3088 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:07:21.267373   52294 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19307-14841/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
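The "bufio.Scanner: token too long" error in the stderr block above comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB; single log lines in lastStart.txt longer than that (such as the wrapped cluster-config entries later in this report) exceed the limit. The following is a minimal, hypothetical Go sketch of the workaround only, not minikube's actual logs code: it reads such a file with an enlarged scanner buffer.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical local path; the report references .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner defaults to a 64 KiB maximum token size; a single line
		// longer than that makes Scan fail with "bufio.Scanner: token too long".
		// Buffer raises the limit (here to 10 MiB) so very long lines still scan.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}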
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-269079 -n multinode-269079
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-269079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.78s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 stop
E0719 19:07:32.427539   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 19:07:46.569529   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269079 stop: exit status 82 (2m0.457459178s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-269079-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-269079 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status
E0719 19:09:43.525113   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269079 status: exit status 3 (18.75835683s)

                                                
                                                
-- stdout --
	multinode-269079
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-269079-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:09:44.448204   52955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host
	E0719 19:09:44.448251   52955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-269079 status" : exit status 3
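Exit status 82 (GUEST_STOP_TIMEOUT) above means `minikube stop` gave up after roughly two minutes while the m02 VM still reported "Running", and the follow-up `status` then failed because SSH to 192.168.39.133 was unreachable. A hedged Go sketch of the kind of polling a triage script could do is shown below; it reuses only the `status --format={{.Host}}` invocation already present in this report, while the helper name, timeout, and poll interval are hypothetical and not part of the test suite.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForStopped polls `minikube status --format={{.Host}}` until the host
	// state reads "Stopped" or the deadline passes. It mirrors the failure above:
	// stop timed out after ~2 minutes while the VM stayed "Running".
	func waitForStopped(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// status exits non-zero for a stopped or unreachable node, so the
			// error is ignored and only the printed host state is inspected.
			out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"status", "--format", "{{.Host}}").CombinedOutput()
			if strings.Contains(string(out), "Stopped") {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("profile %q: host still not Stopped after %s", profile, timeout)
	}

	func main() {
		if err := waitForStopped("multinode-269079", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}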
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-269079 -n multinode-269079
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-269079 logs -n 25: (1.392392239s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079:/home/docker/cp-test_multinode-269079-m02_multinode-269079.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079 sudo cat                                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m02_multinode-269079.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03:/home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079-m03 sudo cat                                   | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp testdata/cp-test.txt                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079:/home/docker/cp-test_multinode-269079-m03_multinode-269079.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079 sudo cat                                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02:/home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079-m02 sudo cat                                   | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-269079 node stop m03                                                          | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	| node    | multinode-269079 node start                                                             | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| stop    | -p multinode-269079                                                                     | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| start   | -p multinode-269079                                                                     | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:04 UTC | 19 Jul 24 19:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC |                     |
	| node    | multinode-269079 node delete                                                            | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC | 19 Jul 24 19:07 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-269079 stop                                                                   | multinode-269079 | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:04:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:04:01.037115   51196 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:04:01.037237   51196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:04:01.037247   51196 out.go:304] Setting ErrFile to fd 2...
	I0719 19:04:01.037254   51196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:04:01.037438   51196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:04:01.037988   51196 out.go:298] Setting JSON to false
	I0719 19:04:01.038897   51196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6384,"bootTime":1721409457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:04:01.038948   51196 start.go:139] virtualization: kvm guest
	I0719 19:04:01.041278   51196 out.go:177] * [multinode-269079] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:04:01.042682   51196 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:04:01.042688   51196 notify.go:220] Checking for updates...
	I0719 19:04:01.044038   51196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:04:01.045403   51196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:04:01.046689   51196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:04:01.047784   51196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:04:01.048928   51196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:04:01.050403   51196 config.go:182] Loaded profile config "multinode-269079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:04:01.050498   51196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:04:01.050890   51196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:04:01.050943   51196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:04:01.066545   51196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0719 19:04:01.066928   51196 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:04:01.067589   51196 main.go:141] libmachine: Using API Version  1
	I0719 19:04:01.067614   51196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:04:01.068025   51196 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:04:01.068294   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.103449   51196 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:04:01.104785   51196 start.go:297] selected driver: kvm2
	I0719 19:04:01.104815   51196 start.go:901] validating driver "kvm2" against &{Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:04:01.104994   51196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:04:01.105434   51196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:04:01.105516   51196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:04:01.119801   51196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:04:01.120915   51196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:04:01.120991   51196 cni.go:84] Creating CNI manager for ""
	I0719 19:04:01.121007   51196 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 19:04:01.121081   51196 start.go:340] cluster config:
	{Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:04:01.121229   51196 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:04:01.124102   51196 out.go:177] * Starting "multinode-269079" primary control-plane node in "multinode-269079" cluster
	I0719 19:04:01.125481   51196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:04:01.125511   51196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 19:04:01.125517   51196 cache.go:56] Caching tarball of preloaded images
	I0719 19:04:01.125581   51196 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:04:01.125592   51196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 19:04:01.125698   51196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/config.json ...
	I0719 19:04:01.125872   51196 start.go:360] acquireMachinesLock for multinode-269079: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:04:01.125927   51196 start.go:364] duration metric: took 38.861µs to acquireMachinesLock for "multinode-269079"
	I0719 19:04:01.125939   51196 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:04:01.125946   51196 fix.go:54] fixHost starting: 
	I0719 19:04:01.126181   51196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:04:01.126213   51196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:04:01.140179   51196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0719 19:04:01.140583   51196 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:04:01.141001   51196 main.go:141] libmachine: Using API Version  1
	I0719 19:04:01.141021   51196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:04:01.141352   51196 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:04:01.141552   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.141728   51196 main.go:141] libmachine: (multinode-269079) Calling .GetState
	I0719 19:04:01.143505   51196 fix.go:112] recreateIfNeeded on multinode-269079: state=Running err=<nil>
	W0719 19:04:01.143540   51196 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:04:01.145532   51196 out.go:177] * Updating the running kvm2 "multinode-269079" VM ...
	I0719 19:04:01.146806   51196 machine.go:94] provisionDockerMachine start ...
	I0719 19:04:01.146829   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:04:01.147081   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.149519   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.149983   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.150030   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.150198   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.150388   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.150575   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.150727   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.150874   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.151107   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.151120   51196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:04:01.264601   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-269079
	
	I0719 19:04:01.264634   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.264902   51196 buildroot.go:166] provisioning hostname "multinode-269079"
	I0719 19:04:01.264929   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.265116   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.267800   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.268221   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.268249   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.268399   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.268580   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.268754   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.268875   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.269064   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.269264   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.269278   51196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-269079 && echo "multinode-269079" | sudo tee /etc/hostname
	I0719 19:04:01.394010   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-269079
	
	I0719 19:04:01.394049   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.396908   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.397253   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.397275   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.397467   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.397670   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.397831   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.397984   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.398152   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.398351   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.398368   51196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-269079' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-269079/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-269079' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:04:01.508292   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:04:01.508328   51196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:04:01.508373   51196 buildroot.go:174] setting up certificates
	I0719 19:04:01.508387   51196 provision.go:84] configureAuth start
	I0719 19:04:01.508406   51196 main.go:141] libmachine: (multinode-269079) Calling .GetMachineName
	I0719 19:04:01.508706   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:04:01.511262   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.511692   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.511714   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.511892   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.513933   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.514314   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.514359   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.514437   51196 provision.go:143] copyHostCerts
	I0719 19:04:01.514483   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:04:01.514519   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:04:01.514526   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:04:01.514597   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:04:01.514670   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:04:01.514687   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:04:01.514691   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:04:01.514716   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:04:01.514754   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:04:01.514769   51196 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:04:01.514776   51196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:04:01.514796   51196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:04:01.514838   51196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.multinode-269079 san=[127.0.0.1 192.168.39.154 localhost minikube multinode-269079]
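
The provision.go step above generates a server certificate whose subject alternative names cover 127.0.0.1, the node IP 192.168.39.154, localhost, minikube, and the machine name, signed by the profile's CA. A minimal, self-contained Go sketch of producing a certificate with that SAN set follows; it is illustrative only (self-signed for brevity, with the 26280h lifetime taken from the CertExpiration value in the config dump), not minikube's actual provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN set copied from the provision.go log line above.
	dnsNames := []string{"localhost", "minikube", "multinode-269079"}
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.154")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-269079"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}

	// Self-signed here for brevity; minikube signs the server cert with its own CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
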
	I0719 19:04:01.695598   51196 provision.go:177] copyRemoteCerts
	I0719 19:04:01.695658   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:04:01.695689   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.698215   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.698625   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.698655   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.698825   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.699000   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.699120   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.699268   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:04:01.786134   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 19:04:01.786348   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:04:01.810290   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 19:04:01.810396   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 19:04:01.832627   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 19:04:01.832699   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:04:01.855512   51196 provision.go:87] duration metric: took 347.10822ms to configureAuth
	I0719 19:04:01.855537   51196 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:04:01.855781   51196 config.go:182] Loaded profile config "multinode-269079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:04:01.855895   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:04:01.858861   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.859307   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:04:01.859333   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:04:01.859545   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:04:01.859731   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.859903   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:04:01.860071   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:04:01.860236   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:04:01.860406   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:04:01.860420   51196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:05:32.568014   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:05:32.568072   51196 machine.go:97] duration metric: took 1m31.421249625s to provisionDockerMachine
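
The %!s(MISSING) in the logged command above (and in the later date +%!s(MISSING).%!N(MISSING) line) is a Go fmt artifact, not part of the remote command: the command legitimately contains a shell "printf %s" (and "date +%s.%N"), and when such a string is passed through a Printf-style logger as the format string with no arguments, Go renders each unmatched verb as %!s(MISSING). A minimal reproduction, assuming that is how the logger handles the string (the actual minikube call site is not shown in this log):

package main

import "fmt"

func main() {
	// The remote command genuinely contains a shell "printf %s" placeholder.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`

	// Logged as a format string with no arguments, the %s inside the
	// command is treated as an unmatched verb:
	fmt.Printf(cmd + "\n") // ... printf %!s(MISSING) "CRIO_MINIKUBE_OPTIONS=..." ...

	// Logged as a value, the command is reproduced verbatim:
	fmt.Printf("%s\n", cmd) // ... printf %s "CRIO_MINIKUBE_OPTIONS=..." ...
}

Note also that essentially all of the 1m31.42s reported for provisionDockerMachine above is spent inside this single SSH command, i.e. in "sudo systemctl restart crio" (the timestamps jump from 19:04:01 to 19:05:32).
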
	I0719 19:05:32.568099   51196 start.go:293] postStartSetup for "multinode-269079" (driver="kvm2")
	I0719 19:05:32.568127   51196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:05:32.568164   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.568570   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:05:32.568598   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.571573   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.571974   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.571996   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.572271   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.572484   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.572648   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.572810   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.658486   51196 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:05:32.662372   51196 command_runner.go:130] > NAME=Buildroot
	I0719 19:05:32.662389   51196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 19:05:32.662395   51196 command_runner.go:130] > ID=buildroot
	I0719 19:05:32.662403   51196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 19:05:32.662410   51196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 19:05:32.662509   51196 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:05:32.662545   51196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:05:32.662638   51196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:05:32.662718   51196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:05:32.662727   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /etc/ssl/certs/220282.pem
	I0719 19:05:32.662804   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:05:32.672061   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:05:32.693504   51196 start.go:296] duration metric: took 125.387003ms for postStartSetup
	I0719 19:05:32.693578   51196 fix.go:56] duration metric: took 1m31.567616605s for fixHost
	I0719 19:05:32.693605   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.696353   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.696822   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.696853   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.697040   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.697232   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.697385   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.697546   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.697735   51196 main.go:141] libmachine: Using SSH client type: native
	I0719 19:05:32.697940   51196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0719 19:05:32.697954   51196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:05:32.808126   51196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415932.778404010
	
	I0719 19:05:32.808154   51196 fix.go:216] guest clock: 1721415932.778404010
	I0719 19:05:32.808165   51196 fix.go:229] Guest: 2024-07-19 19:05:32.77840401 +0000 UTC Remote: 2024-07-19 19:05:32.693589706 +0000 UTC m=+91.690721572 (delta=84.814304ms)
	I0719 19:05:32.808204   51196 fix.go:200] guest clock delta is within tolerance: 84.814304ms
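
fix.go above compares the guest clock, read via the remote date command, against the host clock and accepts the 84.814304ms delta as within tolerance. A small illustrative parser for that "seconds.nanoseconds" output, using the exact values from the log (guestTime is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string produced by the
// remote `date +%s.%N` command.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := guestTime("1721415932.778404010")
	if err != nil {
		panic(err)
	}
	// Host-side reference time taken from the "Remote:" value in the log line above.
	host := time.Date(2024, time.July, 19, 19, 5, 32, 693589706, time.UTC)
	fmt.Printf("guest clock delta: %v\n", guest.Sub(host)) // ~84.814304ms
}
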
	I0719 19:05:32.808216   51196 start.go:83] releasing machines lock for "multinode-269079", held for 1m31.682280593s
	I0719 19:05:32.808244   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.808484   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:05:32.810925   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.811281   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.811305   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.811440   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.811894   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.812072   51196 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:05:32.812162   51196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:05:32.812215   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.812276   51196 ssh_runner.go:195] Run: cat /version.json
	I0719 19:05:32.812298   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:05:32.814733   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.814993   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815049   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.815076   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815194   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.815366   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.815463   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:32.815499   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.815550   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:32.815790   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:05:32.815804   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.815969   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:05:32.816111   51196 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:05:32.816410   51196 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:05:32.922502   51196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 19:05:32.923209   51196 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
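
The two probes above confirm outbound connectivity to registry.k8s.io and read the ISO's /version.json. A short sketch decoding that JSON, with field names taken from the output above (the struct itself is illustrative, not minikube's type):

package main

import (
	"encoding/json"
	"fmt"
)

// versionJSON mirrors the fields visible in the /version.json output above.
type versionJSON struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw := `{"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}`

	var v versionJSON
	if err := json.Unmarshal([]byte(raw), &v); err != nil {
		panic(err)
	}
	fmt.Printf("ISO %s / kicbase %s / minikube %s (commit %s)\n",
		v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}
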
	I0719 19:05:32.923385   51196 ssh_runner.go:195] Run: systemctl --version
	I0719 19:05:32.929373   51196 command_runner.go:130] > systemd 252 (252)
	I0719 19:05:32.929417   51196 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 19:05:32.929476   51196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:05:33.087670   51196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 19:05:33.093106   51196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 19:05:33.093211   51196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:05:33.093276   51196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:05:33.101762   51196 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 19:05:33.101782   51196 start.go:495] detecting cgroup driver to use...
	I0719 19:05:33.101847   51196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:05:33.117412   51196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:05:33.130102   51196 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:05:33.130141   51196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:05:33.142334   51196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:05:33.154451   51196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:05:33.291162   51196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:05:33.435093   51196 docker.go:233] disabling docker service ...
	I0719 19:05:33.435175   51196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:05:33.453492   51196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:05:33.466188   51196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:05:33.605482   51196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:05:33.738300   51196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:05:33.751249   51196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:05:33.769874   51196 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 19:05:33.769948   51196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:05:33.770005   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.779580   51196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:05:33.779648   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.788868   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.799124   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.809610   51196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:05:33.820000   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.830339   51196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.839930   51196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:05:33.849421   51196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:05:33.858055   51196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 19:05:33.858109   51196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:05:33.866614   51196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:05:34.003103   51196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:05:34.228499   51196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:05:34.228572   51196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:05:34.233012   51196 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 19:05:34.233032   51196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 19:05:34.233039   51196 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0719 19:05:34.233049   51196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 19:05:34.233057   51196 command_runner.go:130] > Access: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233068   51196 command_runner.go:130] > Modify: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233079   51196 command_runner.go:130] > Change: 2024-07-19 19:05:34.105026150 +0000
	I0719 19:05:34.233085   51196 command_runner.go:130] >  Birth: -
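
After restarting CRI-O, start.go waits up to 60s for the /var/run/crio/crio.sock socket (the stat output above) and then up to 60s more for crictl to answer. A minimal local sketch of that kind of socket wait (illustrative only; minikube performs the equivalent checks over its ssh_runner inside the VM):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists and is a socket, or the
// deadline passes; this mirrors the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
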
	I0719 19:05:34.233119   51196 start.go:563] Will wait 60s for crictl version
	I0719 19:05:34.233166   51196 ssh_runner.go:195] Run: which crictl
	I0719 19:05:34.236747   51196 command_runner.go:130] > /usr/bin/crictl
	I0719 19:05:34.236807   51196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:05:34.274436   51196 command_runner.go:130] > Version:  0.1.0
	I0719 19:05:34.274462   51196 command_runner.go:130] > RuntimeName:  cri-o
	I0719 19:05:34.274468   51196 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 19:05:34.274475   51196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 19:05:34.274525   51196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:05:34.274615   51196 ssh_runner.go:195] Run: crio --version
	I0719 19:05:34.299381   51196 command_runner.go:130] > crio version 1.29.1
	I0719 19:05:34.299408   51196 command_runner.go:130] > Version:        1.29.1
	I0719 19:05:34.299416   51196 command_runner.go:130] > GitCommit:      unknown
	I0719 19:05:34.299423   51196 command_runner.go:130] > GitCommitDate:  unknown
	I0719 19:05:34.299429   51196 command_runner.go:130] > GitTreeState:   clean
	I0719 19:05:34.299439   51196 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 19:05:34.299445   51196 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 19:05:34.299452   51196 command_runner.go:130] > Compiler:       gc
	I0719 19:05:34.299459   51196 command_runner.go:130] > Platform:       linux/amd64
	I0719 19:05:34.299466   51196 command_runner.go:130] > Linkmode:       dynamic
	I0719 19:05:34.299473   51196 command_runner.go:130] > BuildTags:      
	I0719 19:05:34.299483   51196 command_runner.go:130] >   containers_image_ostree_stub
	I0719 19:05:34.299489   51196 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 19:05:34.299496   51196 command_runner.go:130] >   btrfs_noversion
	I0719 19:05:34.299507   51196 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 19:05:34.299513   51196 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 19:05:34.299524   51196 command_runner.go:130] >   seccomp
	I0719 19:05:34.299530   51196 command_runner.go:130] > LDFlags:          unknown
	I0719 19:05:34.299536   51196 command_runner.go:130] > SeccompEnabled:   true
	I0719 19:05:34.299543   51196 command_runner.go:130] > AppArmorEnabled:  false
	I0719 19:05:34.300683   51196 ssh_runner.go:195] Run: crio --version
	I0719 19:05:34.326060   51196 command_runner.go:130] > crio version 1.29.1
	I0719 19:05:34.326084   51196 command_runner.go:130] > Version:        1.29.1
	I0719 19:05:34.326101   51196 command_runner.go:130] > GitCommit:      unknown
	I0719 19:05:34.326105   51196 command_runner.go:130] > GitCommitDate:  unknown
	I0719 19:05:34.326109   51196 command_runner.go:130] > GitTreeState:   clean
	I0719 19:05:34.326115   51196 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 19:05:34.326119   51196 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 19:05:34.326123   51196 command_runner.go:130] > Compiler:       gc
	I0719 19:05:34.326127   51196 command_runner.go:130] > Platform:       linux/amd64
	I0719 19:05:34.326131   51196 command_runner.go:130] > Linkmode:       dynamic
	I0719 19:05:34.326144   51196 command_runner.go:130] > BuildTags:      
	I0719 19:05:34.326151   51196 command_runner.go:130] >   containers_image_ostree_stub
	I0719 19:05:34.326162   51196 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 19:05:34.326168   51196 command_runner.go:130] >   btrfs_noversion
	I0719 19:05:34.326174   51196 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 19:05:34.326181   51196 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 19:05:34.326186   51196 command_runner.go:130] >   seccomp
	I0719 19:05:34.326196   51196 command_runner.go:130] > LDFlags:          unknown
	I0719 19:05:34.326202   51196 command_runner.go:130] > SeccompEnabled:   true
	I0719 19:05:34.326208   51196 command_runner.go:130] > AppArmorEnabled:  false
	I0719 19:05:34.328653   51196 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:05:34.329808   51196 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:05:34.332208   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:34.332587   51196 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:05:34.332616   51196 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:05:34.332910   51196 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:05:34.336749   51196 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 19:05:34.336990   51196 kubeadm.go:883] updating cluster {Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:05:34.337119   51196 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:05:34.337158   51196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:05:34.377182   51196 command_runner.go:130] > {
	I0719 19:05:34.377200   51196 command_runner.go:130] >   "images": [
	I0719 19:05:34.377205   51196 command_runner.go:130] >     {
	I0719 19:05:34.377212   51196 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 19:05:34.377217   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377224   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 19:05:34.377228   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377232   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377240   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 19:05:34.377248   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 19:05:34.377252   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377256   51196 command_runner.go:130] >       "size": "87165492",
	I0719 19:05:34.377260   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377267   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377272   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377278   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377282   51196 command_runner.go:130] >     },
	I0719 19:05:34.377285   51196 command_runner.go:130] >     {
	I0719 19:05:34.377292   51196 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0719 19:05:34.377296   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377302   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0719 19:05:34.377305   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377310   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377316   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0719 19:05:34.377324   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0719 19:05:34.377329   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377336   51196 command_runner.go:130] >       "size": "87174707",
	I0719 19:05:34.377339   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377360   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377364   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377370   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377374   51196 command_runner.go:130] >     },
	I0719 19:05:34.377386   51196 command_runner.go:130] >     {
	I0719 19:05:34.377394   51196 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 19:05:34.377398   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377404   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 19:05:34.377408   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377412   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377421   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 19:05:34.377429   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 19:05:34.377434   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377438   51196 command_runner.go:130] >       "size": "1363676",
	I0719 19:05:34.377443   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377447   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377453   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377457   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377462   51196 command_runner.go:130] >     },
	I0719 19:05:34.377465   51196 command_runner.go:130] >     {
	I0719 19:05:34.377471   51196 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 19:05:34.377477   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377482   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 19:05:34.377488   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377492   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377501   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 19:05:34.377518   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 19:05:34.377524   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377528   51196 command_runner.go:130] >       "size": "31470524",
	I0719 19:05:34.377534   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377538   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377546   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377551   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377558   51196 command_runner.go:130] >     },
	I0719 19:05:34.377564   51196 command_runner.go:130] >     {
	I0719 19:05:34.377570   51196 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 19:05:34.377576   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377581   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 19:05:34.377584   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377588   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377599   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 19:05:34.377608   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 19:05:34.377614   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377618   51196 command_runner.go:130] >       "size": "61245718",
	I0719 19:05:34.377624   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377628   51196 command_runner.go:130] >       "username": "nonroot",
	I0719 19:05:34.377633   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377638   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377643   51196 command_runner.go:130] >     },
	I0719 19:05:34.377646   51196 command_runner.go:130] >     {
	I0719 19:05:34.377654   51196 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 19:05:34.377659   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377663   51196 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 19:05:34.377669   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377673   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377682   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 19:05:34.377691   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 19:05:34.377696   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377700   51196 command_runner.go:130] >       "size": "150779692",
	I0719 19:05:34.377706   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377710   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377716   51196 command_runner.go:130] >       },
	I0719 19:05:34.377720   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377726   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377730   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377736   51196 command_runner.go:130] >     },
	I0719 19:05:34.377739   51196 command_runner.go:130] >     {
	I0719 19:05:34.377745   51196 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 19:05:34.377751   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377760   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 19:05:34.377766   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377770   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377779   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 19:05:34.377788   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 19:05:34.377793   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377797   51196 command_runner.go:130] >       "size": "117609954",
	I0719 19:05:34.377803   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377806   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377812   51196 command_runner.go:130] >       },
	I0719 19:05:34.377816   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377822   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377826   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377832   51196 command_runner.go:130] >     },
	I0719 19:05:34.377836   51196 command_runner.go:130] >     {
	I0719 19:05:34.377842   51196 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 19:05:34.377846   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377853   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 19:05:34.377859   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377863   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377883   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 19:05:34.377892   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 19:05:34.377895   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377899   51196 command_runner.go:130] >       "size": "112198984",
	I0719 19:05:34.377902   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.377906   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.377910   51196 command_runner.go:130] >       },
	I0719 19:05:34.377914   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377917   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377921   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377924   51196 command_runner.go:130] >     },
	I0719 19:05:34.377927   51196 command_runner.go:130] >     {
	I0719 19:05:34.377933   51196 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 19:05:34.377936   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.377941   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 19:05:34.377944   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377952   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.377959   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 19:05:34.377965   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 19:05:34.377968   51196 command_runner.go:130] >       ],
	I0719 19:05:34.377975   51196 command_runner.go:130] >       "size": "85953945",
	I0719 19:05:34.377978   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.377981   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.377985   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.377988   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.377991   51196 command_runner.go:130] >     },
	I0719 19:05:34.377994   51196 command_runner.go:130] >     {
	I0719 19:05:34.378000   51196 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 19:05:34.378006   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.378011   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 19:05:34.378016   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378020   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.378027   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 19:05:34.378036   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 19:05:34.378041   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378045   51196 command_runner.go:130] >       "size": "63051080",
	I0719 19:05:34.378051   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.378054   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.378060   51196 command_runner.go:130] >       },
	I0719 19:05:34.378064   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.378069   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.378073   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.378079   51196 command_runner.go:130] >     },
	I0719 19:05:34.378082   51196 command_runner.go:130] >     {
	I0719 19:05:34.378091   51196 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 19:05:34.378096   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.378101   51196 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 19:05:34.378107   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378111   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.378120   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 19:05:34.378128   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 19:05:34.378134   51196 command_runner.go:130] >       ],
	I0719 19:05:34.378138   51196 command_runner.go:130] >       "size": "750414",
	I0719 19:05:34.378144   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.378148   51196 command_runner.go:130] >         "value": "65535"
	I0719 19:05:34.378153   51196 command_runner.go:130] >       },
	I0719 19:05:34.378157   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.378162   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.378166   51196 command_runner.go:130] >       "pinned": true
	I0719 19:05:34.378171   51196 command_runner.go:130] >     }
	I0719 19:05:34.378175   51196 command_runner.go:130] >   ]
	I0719 19:05:34.378180   51196 command_runner.go:130] > }
	I0719 19:05:34.378786   51196 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:05:34.378801   51196 crio.go:433] Images already preloaded, skipping extraction
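	(Editor's note: the JSON blocks above and below are the raw output of `sudo crictl images --output json`, which minikube inspects to decide that the preloaded tarball does not need to be extracted. As an illustration only, a minimal Go sketch that decodes that output could look like the following; the struct layout simply mirrors the fields visible in the log (id, repoTags, repoDigests, size, pinned) and is not minikube's own type.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the fields visible in the crictl output above;
	// the field names are assumptions for illustration, not minikube's types.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// e.g. saved beforehand with: crictl images --output json > images.json
		data, err := os.ReadFile("images.json")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%s (pinned=%v, size=%s)\n", img.RepoTags[0], img.Pinned, img.Size)
			}
		}
	}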
	I0719 19:05:34.378848   51196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:05:34.408674   51196 command_runner.go:130] > {
	I0719 19:05:34.408692   51196 command_runner.go:130] >   "images": [
	I0719 19:05:34.408696   51196 command_runner.go:130] >     {
	I0719 19:05:34.408704   51196 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 19:05:34.408708   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408714   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 19:05:34.408718   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408722   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408730   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 19:05:34.408737   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 19:05:34.408741   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408745   51196 command_runner.go:130] >       "size": "87165492",
	I0719 19:05:34.408749   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408753   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408761   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408771   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408777   51196 command_runner.go:130] >     },
	I0719 19:05:34.408780   51196 command_runner.go:130] >     {
	I0719 19:05:34.408786   51196 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0719 19:05:34.408792   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408797   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0719 19:05:34.408801   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408805   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408814   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0719 19:05:34.408821   51196 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0719 19:05:34.408827   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408831   51196 command_runner.go:130] >       "size": "87174707",
	I0719 19:05:34.408835   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408843   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408849   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408853   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408856   51196 command_runner.go:130] >     },
	I0719 19:05:34.408860   51196 command_runner.go:130] >     {
	I0719 19:05:34.408866   51196 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 19:05:34.408872   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408877   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 19:05:34.408884   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408889   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408898   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 19:05:34.408907   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 19:05:34.408913   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408917   51196 command_runner.go:130] >       "size": "1363676",
	I0719 19:05:34.408923   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.408927   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.408936   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.408943   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.408947   51196 command_runner.go:130] >     },
	I0719 19:05:34.408950   51196 command_runner.go:130] >     {
	I0719 19:05:34.408956   51196 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 19:05:34.408962   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.408967   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 19:05:34.408978   51196 command_runner.go:130] >       ],
	I0719 19:05:34.408986   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.408994   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 19:05:34.409009   51196 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 19:05:34.409015   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409019   51196 command_runner.go:130] >       "size": "31470524",
	I0719 19:05:34.409025   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409029   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409035   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409038   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409044   51196 command_runner.go:130] >     },
	I0719 19:05:34.409047   51196 command_runner.go:130] >     {
	I0719 19:05:34.409055   51196 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 19:05:34.409060   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409065   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 19:05:34.409073   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409077   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409086   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 19:05:34.409096   51196 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 19:05:34.409101   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409105   51196 command_runner.go:130] >       "size": "61245718",
	I0719 19:05:34.409111   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409116   51196 command_runner.go:130] >       "username": "nonroot",
	I0719 19:05:34.409121   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409126   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409131   51196 command_runner.go:130] >     },
	I0719 19:05:34.409135   51196 command_runner.go:130] >     {
	I0719 19:05:34.409144   51196 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 19:05:34.409150   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409155   51196 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 19:05:34.409161   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409166   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409174   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 19:05:34.409181   51196 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 19:05:34.409186   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409190   51196 command_runner.go:130] >       "size": "150779692",
	I0719 19:05:34.409200   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409206   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409212   51196 command_runner.go:130] >       },
	I0719 19:05:34.409218   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409222   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409228   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409232   51196 command_runner.go:130] >     },
	I0719 19:05:34.409237   51196 command_runner.go:130] >     {
	I0719 19:05:34.409243   51196 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 19:05:34.409249   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409254   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 19:05:34.409259   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409263   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409272   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 19:05:34.409290   51196 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 19:05:34.409296   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409300   51196 command_runner.go:130] >       "size": "117609954",
	I0719 19:05:34.409306   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409310   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409315   51196 command_runner.go:130] >       },
	I0719 19:05:34.409319   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409325   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409329   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409334   51196 command_runner.go:130] >     },
	I0719 19:05:34.409338   51196 command_runner.go:130] >     {
	I0719 19:05:34.409354   51196 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 19:05:34.409365   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409370   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 19:05:34.409374   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409377   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409425   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 19:05:34.409437   51196 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 19:05:34.409441   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409445   51196 command_runner.go:130] >       "size": "112198984",
	I0719 19:05:34.409451   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409454   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409465   51196 command_runner.go:130] >       },
	I0719 19:05:34.409471   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409475   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409481   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409484   51196 command_runner.go:130] >     },
	I0719 19:05:34.409490   51196 command_runner.go:130] >     {
	I0719 19:05:34.409496   51196 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 19:05:34.409502   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409507   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 19:05:34.409513   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409517   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409526   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 19:05:34.409537   51196 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 19:05:34.409543   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409547   51196 command_runner.go:130] >       "size": "85953945",
	I0719 19:05:34.409553   51196 command_runner.go:130] >       "uid": null,
	I0719 19:05:34.409558   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409564   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409568   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409574   51196 command_runner.go:130] >     },
	I0719 19:05:34.409577   51196 command_runner.go:130] >     {
	I0719 19:05:34.409585   51196 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 19:05:34.409589   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409596   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 19:05:34.409601   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409605   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409614   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 19:05:34.409623   51196 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 19:05:34.409629   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409633   51196 command_runner.go:130] >       "size": "63051080",
	I0719 19:05:34.409639   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409642   51196 command_runner.go:130] >         "value": "0"
	I0719 19:05:34.409648   51196 command_runner.go:130] >       },
	I0719 19:05:34.409652   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409658   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409662   51196 command_runner.go:130] >       "pinned": false
	I0719 19:05:34.409673   51196 command_runner.go:130] >     },
	I0719 19:05:34.409679   51196 command_runner.go:130] >     {
	I0719 19:05:34.409684   51196 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 19:05:34.409690   51196 command_runner.go:130] >       "repoTags": [
	I0719 19:05:34.409695   51196 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 19:05:34.409701   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409705   51196 command_runner.go:130] >       "repoDigests": [
	I0719 19:05:34.409713   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 19:05:34.409722   51196 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 19:05:34.409728   51196 command_runner.go:130] >       ],
	I0719 19:05:34.409732   51196 command_runner.go:130] >       "size": "750414",
	I0719 19:05:34.409735   51196 command_runner.go:130] >       "uid": {
	I0719 19:05:34.409739   51196 command_runner.go:130] >         "value": "65535"
	I0719 19:05:34.409745   51196 command_runner.go:130] >       },
	I0719 19:05:34.409748   51196 command_runner.go:130] >       "username": "",
	I0719 19:05:34.409755   51196 command_runner.go:130] >       "spec": null,
	I0719 19:05:34.409759   51196 command_runner.go:130] >       "pinned": true
	I0719 19:05:34.409764   51196 command_runner.go:130] >     }
	I0719 19:05:34.409767   51196 command_runner.go:130] >   ]
	I0719 19:05:34.409773   51196 command_runner.go:130] > }
	I0719 19:05:34.410143   51196 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:05:34.410165   51196 cache_images.go:84] Images are preloaded, skipping loading
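	(Editor's note: the "Images are preloaded, skipping loading" decision above amounts to checking that every image required for Kubernetes v1.30.3 already appears among the repoTags returned by the runtime. A simplified sketch of that set comparison follows; the expected list is copied from the tags shown in the log, and the real logic in crio.go / cache_images.go differs in detail.)

	package main

	import "fmt"

	// allPresent reports whether every expected tag appears in the tags
	// returned by the container runtime (a simplified stand-in for the
	// "Images are preloaded" check logged above).
	func allPresent(expected, have []string) bool {
		seen := make(map[string]bool, len(have))
		for _, t := range have {
			seen[t] = true
		}
		for _, t := range expected {
			if !seen[t] {
				return false
			}
		}
		return true
	}

	func main() {
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		// Fill from the repoTags of the crictl output; left empty here,
		// so this example prints "preloaded: false".
		have := []string{}
		fmt.Println("preloaded:", allPresent(expected, have))
	}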
	I0719 19:05:34.410174   51196 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0719 19:05:34.410299   51196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-269079 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
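	(Editor's note: the kubelet [Unit]/[Service]/[Install] block logged above is rendered from a few per-node values such as the Kubernetes version, hostname override, and node IP. A hedged Go sketch of that kind of rendering with text/template is shown below; the template string and struct are illustrative only and are not minikube's actual template or types.)

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletOpts holds the handful of values that vary per node in the
	// unit shown above; illustrative only.
	type kubeletOpts struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		opts := kubeletOpts{
			KubernetesVersion: "v1.30.3",
			Hostname:          "multinode-269079",
			NodeIP:            "192.168.39.154",
		}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			os.Exit(1)
		}
	}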
	I0719 19:05:34.410372   51196 ssh_runner.go:195] Run: crio config
	I0719 19:05:34.449879   51196 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 19:05:34.449904   51196 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 19:05:34.449916   51196 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 19:05:34.449921   51196 command_runner.go:130] > #
	I0719 19:05:34.449931   51196 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 19:05:34.449940   51196 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 19:05:34.449949   51196 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 19:05:34.449984   51196 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 19:05:34.449995   51196 command_runner.go:130] > # reload'.
	I0719 19:05:34.450005   51196 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 19:05:34.450017   51196 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 19:05:34.450031   51196 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 19:05:34.450044   51196 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 19:05:34.450051   51196 command_runner.go:130] > [crio]
	I0719 19:05:34.450062   51196 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 19:05:34.450075   51196 command_runner.go:130] > # containers images, in this directory.
	I0719 19:05:34.450094   51196 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 19:05:34.450117   51196 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 19:05:34.450129   51196 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 19:05:34.450141   51196 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 19:05:34.450151   51196 command_runner.go:130] > # imagestore = ""
	I0719 19:05:34.450164   51196 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 19:05:34.450178   51196 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 19:05:34.450216   51196 command_runner.go:130] > storage_driver = "overlay"
	I0719 19:05:34.450233   51196 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 19:05:34.450244   51196 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 19:05:34.450254   51196 command_runner.go:130] > storage_option = [
	I0719 19:05:34.450263   51196 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 19:05:34.450273   51196 command_runner.go:130] > ]
	I0719 19:05:34.450285   51196 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 19:05:34.450304   51196 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 19:05:34.450315   51196 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 19:05:34.450327   51196 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 19:05:34.450340   51196 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 19:05:34.450348   51196 command_runner.go:130] > # always happen on a node reboot
	I0719 19:05:34.450360   51196 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 19:05:34.450384   51196 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 19:05:34.450398   51196 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 19:05:34.450409   51196 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 19:05:34.450417   51196 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 19:05:34.450433   51196 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 19:05:34.450449   51196 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 19:05:34.450459   51196 command_runner.go:130] > # internal_wipe = true
	I0719 19:05:34.450473   51196 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 19:05:34.450486   51196 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 19:05:34.450495   51196 command_runner.go:130] > # internal_repair = false
	I0719 19:05:34.450508   51196 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 19:05:34.450520   51196 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 19:05:34.450533   51196 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 19:05:34.450542   51196 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 19:05:34.450564   51196 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 19:05:34.450574   51196 command_runner.go:130] > [crio.api]
	I0719 19:05:34.450592   51196 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 19:05:34.450606   51196 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 19:05:34.450617   51196 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 19:05:34.450627   51196 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 19:05:34.450640   51196 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 19:05:34.450652   51196 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 19:05:34.450663   51196 command_runner.go:130] > # stream_port = "0"
	I0719 19:05:34.450673   51196 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 19:05:34.450684   51196 command_runner.go:130] > # stream_enable_tls = false
	I0719 19:05:34.450695   51196 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 19:05:34.450705   51196 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 19:05:34.450716   51196 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 19:05:34.450729   51196 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 19:05:34.450739   51196 command_runner.go:130] > # minutes.
	I0719 19:05:34.450746   51196 command_runner.go:130] > # stream_tls_cert = ""
	I0719 19:05:34.450761   51196 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 19:05:34.450775   51196 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 19:05:34.450783   51196 command_runner.go:130] > # stream_tls_key = ""
	I0719 19:05:34.450796   51196 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 19:05:34.450810   51196 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 19:05:34.450841   51196 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 19:05:34.450852   51196 command_runner.go:130] > # stream_tls_ca = ""
	I0719 19:05:34.450865   51196 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 19:05:34.450875   51196 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 19:05:34.450890   51196 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 19:05:34.450901   51196 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 19:05:34.450914   51196 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 19:05:34.450927   51196 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 19:05:34.450937   51196 command_runner.go:130] > [crio.runtime]
	I0719 19:05:34.450948   51196 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 19:05:34.450961   51196 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 19:05:34.450975   51196 command_runner.go:130] > # "nofile=1024:2048"
	I0719 19:05:34.450989   51196 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 19:05:34.450999   51196 command_runner.go:130] > # default_ulimits = [
	I0719 19:05:34.451006   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451018   51196 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 19:05:34.451034   51196 command_runner.go:130] > # no_pivot = false
	I0719 19:05:34.451045   51196 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 19:05:34.451059   51196 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 19:05:34.451071   51196 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 19:05:34.451085   51196 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 19:05:34.451097   51196 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 19:05:34.451109   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 19:05:34.451120   51196 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 19:05:34.451128   51196 command_runner.go:130] > # Cgroup setting for conmon
	I0719 19:05:34.451142   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 19:05:34.451153   51196 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 19:05:34.451166   51196 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 19:05:34.451178   51196 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 19:05:34.451191   51196 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 19:05:34.451201   51196 command_runner.go:130] > conmon_env = [
	I0719 19:05:34.451215   51196 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 19:05:34.451227   51196 command_runner.go:130] > ]
	I0719 19:05:34.451240   51196 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 19:05:34.451252   51196 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 19:05:34.451264   51196 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 19:05:34.451274   51196 command_runner.go:130] > # default_env = [
	I0719 19:05:34.451281   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451291   51196 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 19:05:34.451305   51196 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0719 19:05:34.451311   51196 command_runner.go:130] > # selinux = false
	I0719 19:05:34.451321   51196 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 19:05:34.451331   51196 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 19:05:34.451341   51196 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 19:05:34.451353   51196 command_runner.go:130] > # seccomp_profile = ""
	I0719 19:05:34.451365   51196 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 19:05:34.451377   51196 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 19:05:34.451388   51196 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 19:05:34.451399   51196 command_runner.go:130] > # which might increase security.
	I0719 19:05:34.451410   51196 command_runner.go:130] > # This option is currently deprecated,
	I0719 19:05:34.451420   51196 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 19:05:34.451431   51196 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 19:05:34.451448   51196 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 19:05:34.451463   51196 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 19:05:34.451477   51196 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 19:05:34.451491   51196 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 19:05:34.451502   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.451516   51196 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 19:05:34.451530   51196 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 19:05:34.451540   51196 command_runner.go:130] > # the cgroup blockio controller.
	I0719 19:05:34.451549   51196 command_runner.go:130] > # blockio_config_file = ""
	I0719 19:05:34.451566   51196 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 19:05:34.451576   51196 command_runner.go:130] > # blockio parameters.
	I0719 19:05:34.451586   51196 command_runner.go:130] > # blockio_reload = false
	I0719 19:05:34.451601   51196 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 19:05:34.451611   51196 command_runner.go:130] > # irqbalance daemon.
	I0719 19:05:34.451623   51196 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 19:05:34.451636   51196 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 19:05:34.451650   51196 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 19:05:34.451664   51196 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 19:05:34.451691   51196 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 19:05:34.451708   51196 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 19:05:34.451719   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.451730   51196 command_runner.go:130] > # rdt_config_file = ""
	I0719 19:05:34.451741   51196 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 19:05:34.451754   51196 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 19:05:34.451775   51196 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 19:05:34.451783   51196 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 19:05:34.451793   51196 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 19:05:34.451807   51196 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 19:05:34.451817   51196 command_runner.go:130] > # will be added.
	I0719 19:05:34.451825   51196 command_runner.go:130] > # default_capabilities = [
	I0719 19:05:34.451853   51196 command_runner.go:130] > # 	"CHOWN",
	I0719 19:05:34.451865   51196 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 19:05:34.451871   51196 command_runner.go:130] > # 	"FSETID",
	I0719 19:05:34.451878   51196 command_runner.go:130] > # 	"FOWNER",
	I0719 19:05:34.451887   51196 command_runner.go:130] > # 	"SETGID",
	I0719 19:05:34.451894   51196 command_runner.go:130] > # 	"SETUID",
	I0719 19:05:34.451905   51196 command_runner.go:130] > # 	"SETPCAP",
	I0719 19:05:34.451917   51196 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 19:05:34.451923   51196 command_runner.go:130] > # 	"KILL",
	I0719 19:05:34.451933   51196 command_runner.go:130] > # ]
	I0719 19:05:34.451943   51196 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 19:05:34.451957   51196 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 19:05:34.451969   51196 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 19:05:34.451980   51196 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 19:05:34.451995   51196 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 19:05:34.452005   51196 command_runner.go:130] > default_sysctls = [
	I0719 19:05:34.452019   51196 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 19:05:34.452028   51196 command_runner.go:130] > ]
	I0719 19:05:34.452036   51196 command_runner.go:130] > # List of devices on the host that a
	I0719 19:05:34.452047   51196 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 19:05:34.452059   51196 command_runner.go:130] > # allowed_devices = [
	I0719 19:05:34.452070   51196 command_runner.go:130] > # 	"/dev/fuse",
	I0719 19:05:34.452076   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452089   51196 command_runner.go:130] > # List of additional devices. specified as
	I0719 19:05:34.452104   51196 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 19:05:34.452118   51196 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 19:05:34.452131   51196 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 19:05:34.452140   51196 command_runner.go:130] > # additional_devices = [
	I0719 19:05:34.452144   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452156   51196 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 19:05:34.452171   51196 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 19:05:34.452180   51196 command_runner.go:130] > # 	"/etc/cdi",
	I0719 19:05:34.452191   51196 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 19:05:34.452201   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452212   51196 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 19:05:34.452227   51196 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 19:05:34.452237   51196 command_runner.go:130] > # Defaults to false.
	I0719 19:05:34.452243   51196 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 19:05:34.452257   51196 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 19:05:34.452270   51196 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 19:05:34.452282   51196 command_runner.go:130] > # hooks_dir = [
	I0719 19:05:34.452293   51196 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 19:05:34.452304   51196 command_runner.go:130] > # ]
	I0719 19:05:34.452319   51196 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 19:05:34.452333   51196 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 19:05:34.452343   51196 command_runner.go:130] > # its default mounts from the following two files:
	I0719 19:05:34.452348   51196 command_runner.go:130] > #
	I0719 19:05:34.452362   51196 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 19:05:34.452376   51196 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 19:05:34.452389   51196 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 19:05:34.452398   51196 command_runner.go:130] > #
	I0719 19:05:34.452410   51196 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 19:05:34.452424   51196 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 19:05:34.452436   51196 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 19:05:34.452448   51196 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 19:05:34.452459   51196 command_runner.go:130] > #
	I0719 19:05:34.452470   51196 command_runner.go:130] > # default_mounts_file = ""
	I0719 19:05:34.452479   51196 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 19:05:34.452493   51196 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 19:05:34.452503   51196 command_runner.go:130] > pids_limit = 1024
	I0719 19:05:34.452512   51196 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0719 19:05:34.452523   51196 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 19:05:34.452536   51196 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 19:05:34.452561   51196 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 19:05:34.452572   51196 command_runner.go:130] > # log_size_max = -1
	I0719 19:05:34.452587   51196 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 19:05:34.452598   51196 command_runner.go:130] > # log_to_journald = false
	I0719 19:05:34.452605   51196 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 19:05:34.452616   51196 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 19:05:34.452629   51196 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 19:05:34.452643   51196 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 19:05:34.452657   51196 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 19:05:34.452668   51196 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 19:05:34.452682   51196 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 19:05:34.452692   51196 command_runner.go:130] > # read_only = false
	I0719 19:05:34.452700   51196 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 19:05:34.452709   51196 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 19:05:34.452717   51196 command_runner.go:130] > # live configuration reload.
	I0719 19:05:34.452726   51196 command_runner.go:130] > # log_level = "info"
	I0719 19:05:34.452736   51196 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 19:05:34.452744   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.452750   51196 command_runner.go:130] > # log_filter = ""
	I0719 19:05:34.452759   51196 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 19:05:34.452769   51196 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 19:05:34.452775   51196 command_runner.go:130] > # separated by comma.
	I0719 19:05:34.452785   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452792   51196 command_runner.go:130] > # uid_mappings = ""
	I0719 19:05:34.452802   51196 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 19:05:34.452816   51196 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 19:05:34.452825   51196 command_runner.go:130] > # separated by comma.
	I0719 19:05:34.452840   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452851   51196 command_runner.go:130] > # gid_mappings = ""
	I0719 19:05:34.452864   51196 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 19:05:34.452877   51196 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 19:05:34.452891   51196 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 19:05:34.452904   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452915   51196 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 19:05:34.452929   51196 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 19:05:34.452943   51196 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 19:05:34.452956   51196 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 19:05:34.452968   51196 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 19:05:34.452977   51196 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 19:05:34.452991   51196 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 19:05:34.453001   51196 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 19:05:34.453015   51196 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 19:05:34.453029   51196 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 19:05:34.453045   51196 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 19:05:34.453058   51196 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 19:05:34.453068   51196 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 19:05:34.453077   51196 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 19:05:34.453088   51196 command_runner.go:130] > drop_infra_ctr = false
	I0719 19:05:34.453099   51196 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 19:05:34.453112   51196 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 19:05:34.453127   51196 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 19:05:34.453140   51196 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 19:05:34.453155   51196 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 19:05:34.453166   51196 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 19:05:34.453179   51196 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 19:05:34.453192   51196 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 19:05:34.453200   51196 command_runner.go:130] > # shared_cpuset = ""
	I0719 19:05:34.453214   51196 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 19:05:34.453227   51196 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 19:05:34.453238   51196 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 19:05:34.453249   51196 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 19:05:34.453260   51196 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 19:05:34.453270   51196 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 19:05:34.453284   51196 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 19:05:34.453296   51196 command_runner.go:130] > # enable_criu_support = false
	I0719 19:05:34.453304   51196 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 19:05:34.453318   51196 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 19:05:34.453330   51196 command_runner.go:130] > # enable_pod_events = false
	I0719 19:05:34.453344   51196 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 19:05:34.453371   51196 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 19:05:34.453380   51196 command_runner.go:130] > # default_runtime = "runc"
	I0719 19:05:34.453388   51196 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 19:05:34.453405   51196 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 19:05:34.453422   51196 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 19:05:34.453435   51196 command_runner.go:130] > # creation as a file is not desired either.
	I0719 19:05:34.453452   51196 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 19:05:34.453468   51196 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 19:05:34.453480   51196 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 19:05:34.453489   51196 command_runner.go:130] > # ]
	I0719 19:05:34.453500   51196 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 19:05:34.453515   51196 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 19:05:34.453529   51196 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 19:05:34.453541   51196 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 19:05:34.453550   51196 command_runner.go:130] > #
	I0719 19:05:34.453562   51196 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 19:05:34.453572   51196 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 19:05:34.453596   51196 command_runner.go:130] > # runtime_type = "oci"
	I0719 19:05:34.453608   51196 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 19:05:34.453617   51196 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 19:05:34.453628   51196 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 19:05:34.453640   51196 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 19:05:34.453651   51196 command_runner.go:130] > # monitor_env = []
	I0719 19:05:34.453663   51196 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 19:05:34.453673   51196 command_runner.go:130] > # allowed_annotations = []
	I0719 19:05:34.453683   51196 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 19:05:34.453689   51196 command_runner.go:130] > # Where:
	I0719 19:05:34.453701   51196 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 19:05:34.453715   51196 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 19:05:34.453730   51196 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 19:05:34.453744   51196 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 19:05:34.453755   51196 command_runner.go:130] > #   in $PATH.
	I0719 19:05:34.453766   51196 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 19:05:34.453778   51196 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 19:05:34.453788   51196 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 19:05:34.453799   51196 command_runner.go:130] > #   state.
	I0719 19:05:34.453814   51196 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 19:05:34.453828   51196 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0719 19:05:34.453842   51196 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 19:05:34.453855   51196 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 19:05:34.453869   51196 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 19:05:34.453882   51196 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 19:05:34.453892   51196 command_runner.go:130] > #   The currently recognized values are:
	I0719 19:05:34.453907   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 19:05:34.453923   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 19:05:34.453940   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 19:05:34.453955   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 19:05:34.453968   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 19:05:34.453981   51196 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 19:05:34.453996   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 19:05:34.454011   51196 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 19:05:34.454025   51196 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 19:05:34.454039   51196 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 19:05:34.454052   51196 command_runner.go:130] > #   deprecated option "conmon".
	I0719 19:05:34.454068   51196 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 19:05:34.454080   51196 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 19:05:34.454094   51196 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 19:05:34.454107   51196 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 19:05:34.454122   51196 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0719 19:05:34.454134   51196 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 19:05:34.454145   51196 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 19:05:34.454157   51196 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 19:05:34.454167   51196 command_runner.go:130] > #
	I0719 19:05:34.454175   51196 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 19:05:34.454185   51196 command_runner.go:130] > #
	I0719 19:05:34.454196   51196 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 19:05:34.454210   51196 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 19:05:34.454219   51196 command_runner.go:130] > #
	I0719 19:05:34.454230   51196 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 19:05:34.454243   51196 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 19:05:34.454251   51196 command_runner.go:130] > #
	I0719 19:05:34.454265   51196 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 19:05:34.454276   51196 command_runner.go:130] > # feature.
	I0719 19:05:34.454282   51196 command_runner.go:130] > #
	I0719 19:05:34.454295   51196 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 19:05:34.454310   51196 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 19:05:34.454323   51196 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 19:05:34.454335   51196 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 19:05:34.454349   51196 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 19:05:34.454359   51196 command_runner.go:130] > #
	I0719 19:05:34.454370   51196 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 19:05:34.454386   51196 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 19:05:34.454395   51196 command_runner.go:130] > #
	I0719 19:05:34.454406   51196 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 19:05:34.454420   51196 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 19:05:34.454430   51196 command_runner.go:130] > #
	I0719 19:05:34.454440   51196 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 19:05:34.454454   51196 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 19:05:34.454465   51196 command_runner.go:130] > # limitation.
	I0719 19:05:34.454473   51196 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 19:05:34.454485   51196 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 19:05:34.454496   51196 command_runner.go:130] > runtime_type = "oci"
	I0719 19:05:34.454507   51196 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 19:05:34.454531   51196 command_runner.go:130] > runtime_config_path = ""
	I0719 19:05:34.454537   51196 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 19:05:34.454548   51196 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 19:05:34.454561   51196 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 19:05:34.454572   51196 command_runner.go:130] > monitor_env = [
	I0719 19:05:34.454582   51196 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 19:05:34.454591   51196 command_runner.go:130] > ]
	I0719 19:05:34.454604   51196 command_runner.go:130] > privileged_without_host_devices = false
	I0719 19:05:34.454619   51196 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 19:05:34.454631   51196 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 19:05:34.454644   51196 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 19:05:34.454657   51196 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 19:05:34.454674   51196 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 19:05:34.454688   51196 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 19:05:34.454706   51196 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 19:05:34.454723   51196 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 19:05:34.454733   51196 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 19:05:34.454748   51196 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 19:05:34.454759   51196 command_runner.go:130] > # Example:
	I0719 19:05:34.454768   51196 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 19:05:34.454776   51196 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 19:05:34.454784   51196 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 19:05:34.454793   51196 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 19:05:34.454800   51196 command_runner.go:130] > # cpuset = 0
	I0719 19:05:34.454807   51196 command_runner.go:130] > # cpushares = "0-1"
	I0719 19:05:34.454813   51196 command_runner.go:130] > # Where:
	I0719 19:05:34.454828   51196 command_runner.go:130] > # The workload name is workload-type.
	I0719 19:05:34.454836   51196 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 19:05:34.454844   51196 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 19:05:34.454853   51196 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 19:05:34.454867   51196 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 19:05:34.454876   51196 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
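Per the comments above, a pod opts into a workload by carrying the activation annotation as a key-only annotation (its value is ignored). A minimal sketch of such a pod, assuming the example workload-type above were actually configured, which it is not in this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo            # hypothetical name, for illustration only
  annotations:
    io.crio/workload: ""         # key-only opt-in; the value is ignored
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
EOF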
	I0719 19:05:34.454887   51196 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 19:05:34.454899   51196 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 19:05:34.454907   51196 command_runner.go:130] > # Default value is set to true
	I0719 19:05:34.454915   51196 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 19:05:34.454922   51196 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 19:05:34.454927   51196 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 19:05:34.454935   51196 command_runner.go:130] > # Default value is set to 'false'
	I0719 19:05:34.454957   51196 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 19:05:34.454972   51196 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 19:05:34.454982   51196 command_runner.go:130] > #
	I0719 19:05:34.455068   51196 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 19:05:34.455088   51196 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 19:05:34.455103   51196 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 19:05:34.455116   51196 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 19:05:34.455128   51196 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 19:05:34.455136   51196 command_runner.go:130] > [crio.image]
	I0719 19:05:34.455146   51196 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 19:05:34.455156   51196 command_runner.go:130] > # default_transport = "docker://"
	I0719 19:05:34.455165   51196 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 19:05:34.455192   51196 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 19:05:34.455203   51196 command_runner.go:130] > # global_auth_file = ""
	I0719 19:05:34.455214   51196 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 19:05:34.455225   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.455235   51196 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 19:05:34.455249   51196 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 19:05:34.455262   51196 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 19:05:34.455272   51196 command_runner.go:130] > # This option supports live configuration reload.
	I0719 19:05:34.455279   51196 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 19:05:34.455287   51196 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 19:05:34.455300   51196 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 19:05:34.455319   51196 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 19:05:34.455337   51196 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 19:05:34.455347   51196 command_runner.go:130] > # pause_command = "/pause"
	I0719 19:05:34.455358   51196 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 19:05:34.455367   51196 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 19:05:34.455376   51196 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 19:05:34.455392   51196 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 19:05:34.455405   51196 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 19:05:34.455418   51196 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 19:05:34.455427   51196 command_runner.go:130] > # pinned_images = [
	I0719 19:05:34.455435   51196 command_runner.go:130] > # ]
	I0719 19:05:34.455447   51196 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 19:05:34.455456   51196 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 19:05:34.455468   51196 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 19:05:34.455481   51196 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 19:05:34.455492   51196 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 19:05:34.455502   51196 command_runner.go:130] > # signature_policy = ""
	I0719 19:05:34.455514   51196 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 19:05:34.455528   51196 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 19:05:34.455541   51196 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 19:05:34.455549   51196 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 19:05:34.455558   51196 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 19:05:34.455568   51196 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 19:05:34.455581   51196 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 19:05:34.455593   51196 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 19:05:34.455602   51196 command_runner.go:130] > # changing them here.
	I0719 19:05:34.455612   51196 command_runner.go:130] > # insecure_registries = [
	I0719 19:05:34.455618   51196 command_runner.go:130] > # ]
	I0719 19:05:34.455629   51196 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 19:05:34.455636   51196 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 19:05:34.455642   51196 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 19:05:34.455653   51196 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 19:05:34.455663   51196 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 19:05:34.455676   51196 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 19:05:34.455685   51196 command_runner.go:130] > # CNI plugins.
	I0719 19:05:34.455694   51196 command_runner.go:130] > [crio.network]
	I0719 19:05:34.455706   51196 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 19:05:34.455718   51196 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 19:05:34.455725   51196 command_runner.go:130] > # cni_default_network = ""
	I0719 19:05:34.455735   51196 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 19:05:34.455745   51196 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 19:05:34.455754   51196 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 19:05:34.455765   51196 command_runner.go:130] > # plugin_dirs = [
	I0719 19:05:34.455773   51196 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 19:05:34.455781   51196 command_runner.go:130] > # ]
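Because cni_default_network is left empty above, CRI-O will pick whichever configuration it finds first in network_dir. A quick way to see what that will be on this node, sketched with the profile name from this run:

minikube -p multinode-269079 ssh -- 'ls /etc/cni/net.d/ /opt/cni/bin/'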
	I0719 19:05:34.455791   51196 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 19:05:34.455799   51196 command_runner.go:130] > [crio.metrics]
	I0719 19:05:34.455808   51196 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 19:05:34.455815   51196 command_runner.go:130] > enable_metrics = true
	I0719 19:05:34.455820   51196 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 19:05:34.455829   51196 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 19:05:34.455852   51196 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0719 19:05:34.455863   51196 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 19:05:34.455875   51196 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 19:05:34.455885   51196 command_runner.go:130] > # metrics_collectors = [
	I0719 19:05:34.455893   51196 command_runner.go:130] > # 	"operations",
	I0719 19:05:34.455904   51196 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 19:05:34.455913   51196 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 19:05:34.455921   51196 command_runner.go:130] > # 	"operations_errors",
	I0719 19:05:34.455925   51196 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 19:05:34.455933   51196 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 19:05:34.455943   51196 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 19:05:34.455954   51196 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 19:05:34.455964   51196 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 19:05:34.455974   51196 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 19:05:34.455983   51196 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 19:05:34.455993   51196 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 19:05:34.456002   51196 command_runner.go:130] > # 	"containers_oom_total",
	I0719 19:05:34.456010   51196 command_runner.go:130] > # 	"containers_oom",
	I0719 19:05:34.456019   51196 command_runner.go:130] > # 	"processes_defunct",
	I0719 19:05:34.456029   51196 command_runner.go:130] > # 	"operations_total",
	I0719 19:05:34.456039   51196 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 19:05:34.456050   51196 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 19:05:34.456060   51196 command_runner.go:130] > # 	"operations_errors_total",
	I0719 19:05:34.456069   51196 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 19:05:34.456079   51196 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 19:05:34.456088   51196 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 19:05:34.456095   51196 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 19:05:34.456103   51196 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 19:05:34.456112   51196 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 19:05:34.456124   51196 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 19:05:34.456133   51196 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 19:05:34.456145   51196 command_runner.go:130] > # ]
	I0719 19:05:34.456156   51196 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 19:05:34.456165   51196 command_runner.go:130] > # metrics_port = 9090
	I0719 19:05:34.456174   51196 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 19:05:34.456180   51196 command_runner.go:130] > # metrics_socket = ""
	I0719 19:05:34.456187   51196 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 19:05:34.456200   51196 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 19:05:34.456213   51196 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 19:05:34.456223   51196 command_runner.go:130] > # certificate on any modification event.
	I0719 19:05:34.456233   51196 command_runner.go:130] > # metrics_cert = ""
	I0719 19:05:34.456244   51196 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 19:05:34.456254   51196 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 19:05:34.456264   51196 command_runner.go:130] > # metrics_key = ""
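With enable_metrics = true above and the default metrics_port of 9090, the collectors listed in this section can be scraped directly on the node. A quick spot check, sketched on the assumption that curl is available in the guest:

curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_(operations|image_pulls)' | head -n 20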
	I0719 19:05:34.456275   51196 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 19:05:34.456281   51196 command_runner.go:130] > [crio.tracing]
	I0719 19:05:34.456288   51196 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 19:05:34.456297   51196 command_runner.go:130] > # enable_tracing = false
	I0719 19:05:34.456309   51196 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0719 19:05:34.456319   51196 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 19:05:34.456350   51196 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 19:05:34.456360   51196 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 19:05:34.456369   51196 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 19:05:34.456376   51196 command_runner.go:130] > [crio.nri]
	I0719 19:05:34.456387   51196 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 19:05:34.456396   51196 command_runner.go:130] > # enable_nri = false
	I0719 19:05:34.456406   51196 command_runner.go:130] > # NRI socket to listen on.
	I0719 19:05:34.456414   51196 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 19:05:34.456424   51196 command_runner.go:130] > # NRI plugin directory to use.
	I0719 19:05:34.456434   51196 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 19:05:34.456445   51196 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 19:05:34.456456   51196 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 19:05:34.456467   51196 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 19:05:34.456475   51196 command_runner.go:130] > # nri_disable_connections = false
	I0719 19:05:34.456483   51196 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 19:05:34.456493   51196 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 19:05:34.456504   51196 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 19:05:34.456515   51196 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 19:05:34.456527   51196 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 19:05:34.456536   51196 command_runner.go:130] > [crio.stats]
	I0719 19:05:34.456549   51196 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 19:05:34.456561   51196 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 19:05:34.456569   51196 command_runner.go:130] > # stats_collection_period = 0
	I0719 19:05:34.456611   51196 command_runner.go:130] ! time="2024-07-19 19:05:34.411390708Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 19:05:34.456637   51196 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0719 19:05:34.456784   51196 cni.go:84] Creating CNI manager for ""
	I0719 19:05:34.456796   51196 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 19:05:34.456808   51196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:05:34.456842   51196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-269079 NodeName:multinode-269079 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:05:34.456994   51196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-269079"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
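The rendered kubeadm configuration above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down; it can be inspected after the fact with something like the following sketch, using this run's profile name:

minikube -p multinode-269079 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new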
	
	I0719 19:05:34.457071   51196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:05:34.466414   51196 command_runner.go:130] > kubeadm
	I0719 19:05:34.466433   51196 command_runner.go:130] > kubectl
	I0719 19:05:34.466437   51196 command_runner.go:130] > kubelet
	I0719 19:05:34.466459   51196 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:05:34.466513   51196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:05:34.474978   51196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0719 19:05:34.490124   51196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:05:34.504775   51196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0719 19:05:34.519530   51196 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I0719 19:05:34.522947   51196 command_runner.go:130] > 192.168.39.154	control-plane.minikube.internal
	I0719 19:05:34.523126   51196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:05:34.656921   51196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:05:34.670672   51196 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079 for IP: 192.168.39.154
	I0719 19:05:34.670699   51196 certs.go:194] generating shared ca certs ...
	I0719 19:05:34.670722   51196 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:05:34.670887   51196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:05:34.670940   51196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:05:34.670952   51196 certs.go:256] generating profile certs ...
	I0719 19:05:34.671028   51196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/client.key
	I0719 19:05:34.671083   51196 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key.7e64e0f9
	I0719 19:05:34.671123   51196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key
	I0719 19:05:34.671135   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 19:05:34.671148   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 19:05:34.671164   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 19:05:34.671180   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 19:05:34.671195   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 19:05:34.671213   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 19:05:34.671228   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 19:05:34.671246   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 19:05:34.671317   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:05:34.671348   51196 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:05:34.671360   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:05:34.671386   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:05:34.671411   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:05:34.671434   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:05:34.671477   51196 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:05:34.671506   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.671521   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.671535   51196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem -> /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.672118   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:05:34.695433   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:05:34.717165   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:05:34.738855   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:05:34.760014   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:05:34.780762   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:05:34.802183   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:05:34.824134   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/multinode-269079/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:05:34.845819   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:05:34.867630   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:05:34.889587   51196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:05:34.910521   51196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:05:34.925856   51196 ssh_runner.go:195] Run: openssl version
	I0719 19:05:34.931364   51196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 19:05:34.931422   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:05:34.941168   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945216   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945240   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.945277   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:05:34.950366   51196 command_runner.go:130] > 3ec20f2e
	I0719 19:05:34.950547   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:05:34.958942   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:05:34.968449   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972339   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972477   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.972527   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:05:34.977532   51196 command_runner.go:130] > b5213941
	I0719 19:05:34.977575   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:05:34.986187   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:05:34.995695   51196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999579   51196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999598   51196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:05:34.999628   51196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:05:35.004815   51196 command_runner.go:130] > 51391683
	I0719 19:05:35.004878   51196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
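The <hash>.0 symlink names created above follow the OpenSSL subject-hash convention, which is how TLS clients locate a CA under /etc/ssl/certs. The lookup can be reproduced by hand with the files from this run (a sketch, run on the node):

HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
ls -l "/etc/ssl/certs/${HASH}.0"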
	I0719 19:05:35.013185   51196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:05:35.017242   51196 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:05:35.017266   51196 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 19:05:35.017275   51196 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0719 19:05:35.017286   51196 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 19:05:35.017295   51196 command_runner.go:130] > Access: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017306   51196 command_runner.go:130] > Modify: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017315   51196 command_runner.go:130] > Change: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017325   51196 command_runner.go:130] >  Birth: 2024-07-19 18:58:43.710879364 +0000
	I0719 19:05:35.017376   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:05:35.022462   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.022506   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:05:35.037015   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.037481   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:05:35.050872   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.051242   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:05:35.056644   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.056705   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:05:35.062077   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.062214   51196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:05:35.067856   51196 command_runner.go:130] > Certificate will not expire
	I0719 19:05:35.067944   51196 kubeadm.go:392] StartCluster: {Name:multinode-269079 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-269079 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.165 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:05:35.068080   51196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:05:35.068133   51196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:05:35.103443   51196 command_runner.go:130] > a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6
	I0719 19:05:35.103469   51196 command_runner.go:130] > 49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955
	I0719 19:05:35.103476   51196 command_runner.go:130] > e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66
	I0719 19:05:35.103485   51196 command_runner.go:130] > e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e
	I0719 19:05:35.103494   51196 command_runner.go:130] > 68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48
	I0719 19:05:35.103501   51196 command_runner.go:130] > 64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43
	I0719 19:05:35.103510   51196 command_runner.go:130] > 321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0
	I0719 19:05:35.103522   51196 command_runner.go:130] > d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831
	I0719 19:05:35.103549   51196 cri.go:89] found id: "a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6"
	I0719 19:05:35.103560   51196 cri.go:89] found id: "49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955"
	I0719 19:05:35.103565   51196 cri.go:89] found id: "e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66"
	I0719 19:05:35.103572   51196 cri.go:89] found id: "e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e"
	I0719 19:05:35.103576   51196 cri.go:89] found id: "68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48"
	I0719 19:05:35.103580   51196 cri.go:89] found id: "64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43"
	I0719 19:05:35.103584   51196 cri.go:89] found id: "321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0"
	I0719 19:05:35.103591   51196 cri.go:89] found id: "d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831"
	I0719 19:05:35.103595   51196 cri.go:89] found id: ""
	I0719 19:05:35.103647   51196 ssh_runner.go:195] Run: sudo runc list -f json
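The IDs listed above come from the crictl query issued just before them; the same containers can be listed in human-readable form on the node with a command along these lines (a sketch):

sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system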
	
	
	==> CRI-O <==
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.033910101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416185033880162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46f4d4d5-3ad4-4c7e-a98a-38c73f4216ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.034519970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=436fb35e-dabd-4628-bddc-412380683080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.034586664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=436fb35e-dabd-4628-bddc-412380683080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.034945755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=436fb35e-dabd-4628-bddc-412380683080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.074301336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63810900-1c4e-43e5-90a4-2450f523b46d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.074414475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63810900-1c4e-43e5-90a4-2450f523b46d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.075912226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac56aa5b-de1e-4e51-b4d9-be284a521912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.076421894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416185076394629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac56aa5b-de1e-4e51-b4d9-be284a521912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.080413609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc9ecf95-dcc6-4c2e-aacb-cef96594f87b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.080467285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc9ecf95-dcc6-4c2e-aacb-cef96594f87b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.080985212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc9ecf95-dcc6-4c2e-aacb-cef96594f87b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.120740960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfd920fb-aaf7-4ef2-9f6e-ed904771c10d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.120822875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfd920fb-aaf7-4ef2-9f6e-ed904771c10d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.121939613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e70afef-7eed-4d8e-a87a-b0da060b8481 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.122465586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416185122444018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e70afef-7eed-4d8e-a87a-b0da060b8481 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.122983276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dce40cc5-1d26-4687-863d-8eeac3090d63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.123037185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dce40cc5-1d26-4687-863d-8eeac3090d63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.123444394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dce40cc5-1d26-4687-863d-8eeac3090d63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.165916774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=039c2ffe-2fb1-4b11-87aa-e22b8c29d97d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.165992253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=039c2ffe-2fb1-4b11-87aa-e22b8c29d97d name=/runtime.v1.RuntimeService/Version
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.167344082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1ff6d66-e38d-455f-bca9-63437955fe31 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.167887356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416185167862660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1ff6d66-e38d-455f-bca9-63437955fe31 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.168411571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1dfc1dc-a755-4e33-9a96-51a416559de7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.168475661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1dfc1dc-a755-4e33-9a96-51a416559de7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:09:45 multinode-269079 crio[2873]: time="2024-07-19 19:09:45.168824782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:894bd871a6c3fea5bb7630c7a3b093b521388e780b8f3a2bba951a32aba8f85b,PodSandboxId:9d8f06d1f03f5a0f5b3531cfc0a486eacb620b5f2947dbf822900336c4d8e259,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721415975098398597,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad,PodSandboxId:c571d80547cde7518ba0b559c4e91330636429a6fc0e1b042bc6c981a915dcbb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721415941585538619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614,PodSandboxId:1601505f0629c7483a924f99b2e108d25deec7589b71610152dc0039f6eb9659,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721415941386512179,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9c6c04f5ffc6f44009fbe8f9b51c78a11ff11509217a279a52515717be2092,PodSandboxId:62d16e85274e126e42ae27cbbfd24257bcc929dcdc4c5b92954c398c62b44411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721415941374820723,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},An
notations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0,PodSandboxId:8bf0f78fac48b4396a533326e9a24e9c889e8a65867250c233eab8ecc4b7aa7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721415941330344981,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.ku
bernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525,PodSandboxId:6db352d6aa0718baeeca18fc47bdf5cc124bc072dd6e602f808d7e8ef051878e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721415937553462604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13,PodSandboxId:36e84396da070a006026129d11038693d523fbe32a892930c97e84402462fddb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721415937516605499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de,PodSandboxId:02664b942fba6a984f140de0b3e18326684e125bebfbd01bcb7fc7dd87b8b4d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721415937463895262,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62,PodSandboxId:b76dd8c5ae47ec5a1de1ba37ba5bab60b99ca95c57cde30f4ad5bd1b8786d173,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721415937453892928,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d14d23b67d466906ae5bd294307038c92e6a674fb5f9b43793d59fc077dc281c,PodSandboxId:d1c000b9a2b625f9122b2809d835703e0199ed4ed7083527adb3708c8cb284d4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721415615609057770,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6fcln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6cd030c-4433-4384-afb6-224e9298f440,},Annotations:map[string]string{io.kubernetes.container.hash: cc020fff,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6,PodSandboxId:ffd82047fa6c05e1020361333fb66b324993289ead78cdd33c8591fa632fc334,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721415563967720756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xccsr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe26d277-d2d1-4f90-b959-f5c5cab1e3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 96c51d41,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a43bbb0873d11feeb5dc4abc6e9eb8eebbab12d60c776484c97aee8ad8c955,PodSandboxId:74e7c4e8e9b2407ad33cbc60830967321f7b16a6970003397f38fd84d6897f5c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721415562394397035,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0bcd5d04-5e58-4c45-b920-3e5c914cb20a,},Annotations:map[string]string{io.kubernetes.container.hash: e1cf223e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66,PodSandboxId:1047c18ae3d8fd2df34386adf401628e88acc3c6cbd6ed5007f1a4d19540a20b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721415550592022676,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k2tw6,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: c03ec807-2f25-4220-aade-72b3b186cbff,},Annotations:map[string]string{io.kubernetes.container.hash: e2fb1063,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e,PodSandboxId:f2e597e3254d2290e71c52298d51e2766620f18fde0eab0f31932b24f512db95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721415547342755584,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sthg6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 26619429-a87d-4b20-859c-ccefabcd5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 51979c36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48,PodSandboxId:340f9bce574ea19f864a36e20ff422006bfd1deb41b8a84713f1df57167af056,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721415527382695535,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85949570dee989a82fdcd94cbe491c
22,},Annotations:map[string]string{io.kubernetes.container.hash: 9cb43d85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43,PodSandboxId:a13fee91701e93e490f3cc3d5d0860972fb91c4d6049309fc0866ce8b54c0db3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721415527368954252,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb76a4932ccceb91526779cec095f7c5,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0,PodSandboxId:7a6e0f8671b91f2a441b89dfbd988618dfd444d45ec9b1232013550e9a3d5295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721415527339802479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1f2459a592675837a715c978418f030,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831,PodSandboxId:8d8e7cba76b9224d692f705da3e4ec80fe3f8771027d2425cfbebc9289031ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721415527324140974,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-269079,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39e57bc97afce52cf75404d8daa3ce28,},Annotations:m
ap[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1dfc1dc-a755-4e33-9a96-51a416559de7 name=/runtime.v1.RuntimeService/ListContainers
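
The debug entries above are CRI-O answering /runtime.v1.RuntimeService/ListContainers calls over its CRI socket; both the kubelet and crictl drive this API. As a point of reference only (not code from this test suite), a minimal Go client issuing the same call with an empty filter could look like the sketch below, using the published k8s.io/cri-api and google.golang.org/grpc packages. Run on the node itself, it would print the same containers that the "container status" table below lists.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket; a local unix socket needs no TLS.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter triggers the "No filters were applied, returning full
	// container list" path seen in the crio debug log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s  attempt=%d\n",
			c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
	}
}
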
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	894bd871a6c3f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9d8f06d1f03f5       busybox-fc5497c4f-6fcln
	46fc9907bacdb       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   c571d80547cde       kindnet-k2tw6
	e533ceba794b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   1601505f0629c       coredns-7db6d8ff4d-xccsr
	2a9c6c04f5ffc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   62d16e85274e1       storage-provisioner
	d5f5e7486a1a2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   8bf0f78fac48b       kube-proxy-sthg6
	06cf7dbdff630       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   6db352d6aa071       kube-scheduler-multinode-269079
	1d157f5058913       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   36e84396da070       etcd-multinode-269079
	94bbb6dcaa9e8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   02664b942fba6       kube-apiserver-multinode-269079
	3fbead3e67529       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   b76dd8c5ae47e       kube-controller-manager-multinode-269079
	d14d23b67d466       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   d1c000b9a2b62       busybox-fc5497c4f-6fcln
	a8482fd072f2f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   ffd82047fa6c0       coredns-7db6d8ff4d-xccsr
	49a43bbb0873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   74e7c4e8e9b24       storage-provisioner
	e75ab08afcd7a       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   1047c18ae3d8f       kindnet-k2tw6
	e71f310f55a55       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   f2e597e3254d2       kube-proxy-sthg6
	68f992a866a01       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   340f9bce574ea       etcd-multinode-269079
	64edcccff1ef0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   a13fee91701e9       kube-scheduler-multinode-269079
	321b6c480a371       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   7a6e0f8671b91       kube-controller-manager-multinode-269079
	d45496ab592ce       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   8d8e7cba76b92       kube-apiserver-multinode-269079
	
	
	==> coredns [a8482fd072f2f3c18b107fac23e85f1f42e11a3d9ea24051b7ea40e5174964c6] <==
	[INFO] 10.244.1.2:56033 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001743583s
	[INFO] 10.244.1.2:35255 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099513s
	[INFO] 10.244.1.2:52053 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081038s
	[INFO] 10.244.1.2:55400 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00101875s
	[INFO] 10.244.1.2:51083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069199s
	[INFO] 10.244.1.2:54959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086704s
	[INFO] 10.244.1.2:45609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075s
	[INFO] 10.244.0.3:41084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106625s
	[INFO] 10.244.0.3:34425 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059884s
	[INFO] 10.244.0.3:46778 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000040101s
	[INFO] 10.244.0.3:59356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043828s
	[INFO] 10.244.1.2:40199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013749s
	[INFO] 10.244.1.2:44693 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097438s
	[INFO] 10.244.1.2:52337 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087198s
	[INFO] 10.244.1.2:55280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075693s
	[INFO] 10.244.0.3:47207 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072508s
	[INFO] 10.244.0.3:57558 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000118504s
	[INFO] 10.244.0.3:33453 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006221s
	[INFO] 10.244.0.3:47856 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069179s
	[INFO] 10.244.1.2:54516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129515s
	[INFO] 10.244.1.2:37969 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111945s
	[INFO] 10.244.1.2:48556 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108189s
	[INFO] 10.244.1.2:50848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103317s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
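
The query lines above show the busybox pods (10.244.0.3 and 10.244.1.2) resolving names such as kubernetes.default and host.minikube.internal through the cluster DNS service, whose ClusterIP (10.96.0.10) can be read back from the reverse-lookup entries. As a rough, stand-alone illustration (not part of the test suite; the service IP is inferred from the log), a Go program that sends the same kind of A/AAAA lookups straight at that address from inside the cluster network might be:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send all lookups to the in-cluster DNS service instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Produces A/AAAA queries for kubernetes.default.svc.cluster.local, which
	// is what the coredns query log above records when the log plugin is on.
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}
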
	
	
	==> coredns [e533ceba794b58bc3b02f0a58da4e04f4ce86956f27cdae2e26db00006a10614] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33230 - 52201 "HINFO IN 806961655920265643.1201622078205259199. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009922152s
	
	
	==> describe nodes <==
	Name:               multinode-269079
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-269079
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-269079
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T18_58_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:58:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-269079
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:09:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:58:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:05:40 +0000   Fri, 19 Jul 2024 18:59:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    multinode-269079
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 315647a683dc44f58933c98e307c48ab
	  System UUID:                315647a6-83dc-44f5-8933-c98e307c48ab
	  Boot ID:                    17ab67e5-8d67-437b-bbc2-3cd5e64f7683
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6fcln                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kube-system                 coredns-7db6d8ff4d-xccsr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-269079                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-k2tw6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-269079             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-269079    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sthg6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-269079             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-269079 event: Registered Node multinode-269079 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-269079 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-269079 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-269079 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-269079 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                node-controller  Node multinode-269079 event: Registered Node multinode-269079 in Controller
	
	
	Name:               multinode-269079-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-269079-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-269079
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T19_06_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:06:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-269079-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:07:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:08:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:08:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:08:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 19:06:51 +0000   Fri, 19 Jul 2024 19:08:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-269079-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1adbe5e8ade44e8a3c64a0ac3f667b1
	  System UUID:                d1adbe5e-8ade-44e8-a3c6-4a0ac3f667b1
	  Boot ID:                    1e2288ad-9b1f-49f0-b14b-584f458a8d2c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np9b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-fvjg5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-proxy-drtlz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m56s (x2 over 9m56s)  kubelet          Node multinode-269079-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x2 over 9m56s)  kubelet          Node multinode-269079-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x2 over 9m56s)  kubelet          Node multinode-269079-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m35s                  kubelet          Node multinode-269079-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-269079-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-269079-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-269079-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-269079-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-269079-m02 status is now: NodeNotReady
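
Comparing the two node descriptions: multinode-269079 is Ready, while multinode-269079-m02 has every condition stuck at Unknown with "Kubelet stopped posting node status" and a lease last renewed at 19:07:22, so the node controller marked it NotReady. A hedged client-go sketch of this kind of readiness check (the KUBECONFIG handling and names are illustrative, not the test's own helpers) could look like:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the minikube profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// multinode-269079 reports True here; -m02 reports Unknown
				// because its kubelet stopped renewing the node lease.
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
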
	
	
	==> dmesg <==
	[  +0.055017] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057090] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.189970] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.124166] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.250379] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.959978] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +3.864064] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.058652] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.476582] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.072761] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.813464] kauditd_printk_skb: 18 callbacks suppressed
	[Jul19 18:59] systemd-fstab-generator[1451]: Ignoring "noauto" option for root device
	[  +5.144801] kauditd_printk_skb: 57 callbacks suppressed
	[Jul19 19:00] kauditd_printk_skb: 14 callbacks suppressed
	[Jul19 19:05] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.136932] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.174082] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +0.141141] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.259670] systemd-fstab-generator[2858]: Ignoring "noauto" option for root device
	[  +0.655461] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +2.067107] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +4.654071] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.754769] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.220194] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[Jul19 19:06] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1d157f505891344f357ec23eb8fe312255a86a35c7b54c8ee387ccf0c9128b13] <==
	{"level":"info","ts":"2024-07-19T19:05:38.056109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 switched to configuration voters=(1223707007001805620)"}
	{"level":"info","ts":"2024-07-19T19:05:38.056226Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","added-peer-id":"10fb7b0a157fc334","added-peer-peer-urls":["https://192.168.39.154:2380"]}
	{"level":"info","ts":"2024-07-19T19:05:38.056387Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:05:38.05643Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:05:38.080718Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:05:38.080924Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"10fb7b0a157fc334","initial-advertise-peer-urls":["https://192.168.39.154:2380"],"listen-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:05:38.080969Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:05:38.081128Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:05:38.081186Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:05:39.492408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-07-19T19:05:39.492645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.492737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-07-19T19:05:39.498031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:05:39.498056Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:multinode-269079 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:05:39.499114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:05:39.499778Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:05:39.499838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:05:39.500603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:05:39.501478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-07-19T19:06:25.189556Z","caller":"traceutil/trace.go:171","msg":"trace[483146368] transaction","detail":"{read_only:false; response_revision:1089; number_of_response:1; }","duration":"112.914404ms","start":"2024-07-19T19:06:25.076616Z","end":"2024-07-19T19:06:25.18953Z","steps":["trace[483146368] 'process raft request'  (duration: 112.79748ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:07:04.136311Z","caller":"traceutil/trace.go:171","msg":"trace[200758484] transaction","detail":"{read_only:false; response_revision:1183; number_of_response:1; }","duration":"108.045242ms","start":"2024-07-19T19:07:04.028241Z","end":"2024-07-19T19:07:04.136286Z","steps":["trace[200758484] 'process raft request'  (duration: 107.183081ms)"],"step_count":1}
	
	
	==> etcd [68f992a866a01221fd939d28856ca7ec579717b7fe20a4c90e583139e69f1c48] <==
	{"level":"info","ts":"2024-07-19T18:58:48.513633Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:58:48.513772Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:58:48.516199Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T18:58:48.51623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T18:58:48.51753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-07-19T18:58:48.522888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.523025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.523057Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:58:48.52333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T18:59:50.059342Z","caller":"traceutil/trace.go:171","msg":"trace[1305887973] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"111.265968ms","start":"2024-07-19T18:59:49.948042Z","end":"2024-07-19T18:59:50.059308Z","steps":["trace[1305887973] 'process raft request'  (duration: 111.140281ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:59:50.197837Z","caller":"traceutil/trace.go:171","msg":"trace[563691110] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"193.382139ms","start":"2024-07-19T18:59:50.004438Z","end":"2024-07-19T18:59:50.19782Z","steps":["trace[563691110] 'process raft request'  (duration: 187.99245ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T18:59:50.198004Z","caller":"traceutil/trace.go:171","msg":"trace[1315561461] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"134.732511ms","start":"2024-07-19T18:59:50.063256Z","end":"2024-07-19T18:59:50.197989Z","steps":["trace[1315561461] 'process raft request'  (duration: 134.383464ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:00:00.689527Z","caller":"traceutil/trace.go:171","msg":"trace[130620928] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"101.793106ms","start":"2024-07-19T19:00:00.587719Z","end":"2024-07-19T19:00:00.689512Z","steps":["trace[130620928] 'process raft request'  (duration: 100.775946ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:00:47.504524Z","caller":"traceutil/trace.go:171","msg":"trace[1374524115] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"242.988867ms","start":"2024-07-19T19:00:47.261516Z","end":"2024-07-19T19:00:47.504505Z","steps":["trace[1374524115] 'process raft request'  (duration: 227.469159ms)","trace[1374524115] 'compare'  (duration: 15.408424ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:00:47.505223Z","caller":"traceutil/trace.go:171","msg":"trace[1588051002] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"177.068172ms","start":"2024-07-19T19:00:47.328142Z","end":"2024-07-19T19:00:47.50521Z","steps":["trace[1588051002] 'process raft request'  (duration: 176.754897ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:04:01.980085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T19:04:01.983819Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-269079","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	{"level":"warn","ts":"2024-07-19T19:04:01.984009Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984048Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:04:01.984246Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T19:04:02.045999Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"10fb7b0a157fc334"}
	{"level":"info","ts":"2024-07-19T19:04:02.048435Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:04:02.048691Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-07-19T19:04:02.048713Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-269079","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	
	
	==> kernel <==
	 19:09:45 up 11 min,  0 users,  load average: 0.24, 0.20, 0.11
	Linux multinode-269079 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [46fc9907bacdb8a8cfa273acb3043b189442ae850273319b4ace7655cbcde8ad] <==
	I0719 19:08:42.458364       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:08:52.457998       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:08:52.458096       1 main.go:299] handling current node
	I0719 19:08:52.458124       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:08:52.458130       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:09:02.463631       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:09:02.463745       1 main.go:299] handling current node
	I0719 19:09:02.463774       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:09:02.463792       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:09:12.458331       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:09:12.458453       1 main.go:299] handling current node
	I0719 19:09:12.458482       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:09:12.458500       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:09:22.466523       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:09:22.466568       1 main.go:299] handling current node
	I0719 19:09:22.466586       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:09:22.466595       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:09:32.463758       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:09:32.463801       1 main.go:299] handling current node
	I0719 19:09:32.463828       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:09:32.463833       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:09:42.457927       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:09:42.458116       1 main.go:299] handling current node
	I0719 19:09:42.458207       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:09:42.458232       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e75ab08afcd7afa95ec628bc9bf04c07e01d635090c69e2bb19057e80bd5ff66] <==
	I0719 19:03:21.457615       1 main.go:299] handling current node
	I0719 19:03:31.462343       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:31.462397       1 main.go:299] handling current node
	I0719 19:03:31.462424       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:31.462429       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:31.462580       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:31.462599       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:41.464804       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:41.464991       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:41.465244       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:41.465280       1 main.go:299] handling current node
	I0719 19:03:41.465325       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:41.465343       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:51.465074       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:03:51.465345       1 main.go:299] handling current node
	I0719 19:03:51.465392       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:03:51.465413       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:51.465615       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:03:51.465650       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:04:01.465307       1 main.go:295] Handling node with IPs: map[192.168.39.133:{}]
	I0719 19:04:01.465359       1 main.go:322] Node multinode-269079-m02 has CIDR [10.244.1.0/24] 
	I0719 19:04:01.465488       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0719 19:04:01.465509       1 main.go:322] Node multinode-269079-m03 has CIDR [10.244.3.0/24] 
	I0719 19:04:01.465563       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0719 19:04:01.465591       1 main.go:299] handling current node
	
	
	==> kube-apiserver [94bbb6dcaa9e8481e24ce8e895f693b8120a5acc59ce7a192da7a5a743d080de] <==
	I0719 19:05:40.758921       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 19:05:40.763562       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 19:05:40.767239       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 19:05:40.767310       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 19:05:40.770370       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 19:05:40.791692       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 19:05:40.796020       1 aggregator.go:165] initial CRD sync complete...
	I0719 19:05:40.796065       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 19:05:40.796072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 19:05:40.796081       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:05:40.829061       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 19:05:40.829106       1 policy_source.go:224] refreshing policies
	E0719 19:05:40.856750       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 19:05:40.860776       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:05:40.862139       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:05:40.877887       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 19:05:40.882196       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:05:41.674885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 19:05:42.468105       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 19:05:42.578968       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 19:05:42.590762       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 19:05:42.659601       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:05:42.667101       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 19:05:53.912126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 19:05:54.006197       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d45496ab592ce2cd7517e1885f2249bea0a1add7d091cdd9e5519ce84c40c831] <==
	I0719 18:58:50.981001       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 18:58:50.985141       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 18:58:50.985208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 18:58:51.548518       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 18:58:51.593923       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 18:58:51.697114       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 18:58:51.702896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154]
	I0719 18:58:51.703812       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 18:58:51.707803       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 18:58:52.042789       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 18:58:52.934999       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 18:58:52.949403       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 18:58:52.963991       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 18:59:05.800239       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 18:59:05.999793       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0719 19:00:16.565434       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40278: use of closed network connection
	E0719 19:00:16.722840       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40296: use of closed network connection
	E0719 19:00:16.884231       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40312: use of closed network connection
	E0719 19:00:17.213354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40334: use of closed network connection
	E0719 19:00:17.370117       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40362: use of closed network connection
	E0719 19:00:17.629997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40382: use of closed network connection
	E0719 19:00:17.789347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40402: use of closed network connection
	E0719 19:00:17.943338       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40426: use of closed network connection
	E0719 19:00:18.104143       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8443->192.168.39.1:40430: use of closed network connection
	I0719 19:04:01.979843       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [321b6c480a3713de53fd2e1bc97ebd21ede0deb84be28bc71b31c95c9496e5a0] <==
	I0719 18:59:50.201045       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m02\" does not exist"
	I0719 18:59:50.238090       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m02" podCIDRs=["10.244.1.0/24"]
	I0719 18:59:50.314427       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-269079-m02"
	I0719 19:00:10.055349       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:00:12.523099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.594502ms"
	I0719 19:00:12.544196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.436318ms"
	I0719 19:00:12.544336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.417µs"
	I0719 19:00:12.555079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.276µs"
	I0719 19:00:16.046444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.475799ms"
	I0719 19:00:16.047220       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.372µs"
	I0719 19:00:16.168414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.536343ms"
	I0719 19:00:16.169211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.657µs"
	I0719 19:00:47.507630       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:00:47.507862       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:00:47.525525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.2.0/24"]
	I0719 19:00:50.332332       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-269079-m03"
	I0719 19:01:07.876636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:35.748244       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:37.184399       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:01:37.186804       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:01:37.206679       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.3.0/24"]
	I0719 19:01:56.599555       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m03"
	I0719 19:02:35.388315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m03"
	I0719 19:02:35.424658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.08126ms"
	I0719 19:02:35.424733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.198µs"
	
	
	==> kube-controller-manager [3fbead3e675296f48b31a6bbd35249117e32374c47b3f82855779e42234a4d62] <==
	I0719 19:06:20.912960       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m02\" does not exist"
	I0719 19:06:20.922778       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m02" podCIDRs=["10.244.1.0/24"]
	I0719 19:06:22.809484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.308µs"
	I0719 19:06:22.882288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.322µs"
	I0719 19:06:22.902260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.352µs"
	I0719 19:06:22.902430       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.903µs"
	I0719 19:06:22.907298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.703µs"
	I0719 19:06:23.608966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.177µs"
	I0719 19:06:40.707732       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:40.728842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.131µs"
	I0719 19:06:40.742765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.255µs"
	I0719 19:06:44.936226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.848612ms"
	I0719 19:06:44.938335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.857µs"
	I0719 19:06:58.534043       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:59.773005       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-269079-m03\" does not exist"
	I0719 19:06:59.774450       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:06:59.785898       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-269079-m03" podCIDRs=["10.244.2.0/24"]
	I0719 19:07:18.923133       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:07:23.937821       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-269079-m02"
	I0719 19:08:04.050843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.103465ms"
	I0719 19:08:04.051486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.242µs"
	I0719 19:08:13.933524       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zqdz5"
	I0719 19:08:13.957546       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zqdz5"
	I0719 19:08:13.957721       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dd6vj"
	I0719 19:08:13.983686       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dd6vj"
	
	
	==> kube-proxy [d5f5e7486a1a2d48f6df68e16b9fb442d5f76c20906fbef9f20a2126380085e0] <==
	I0719 19:05:41.586554       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:05:41.610244       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0719 19:05:41.650291       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:05:41.650347       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:05:41.650363       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:05:41.653843       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:05:41.654083       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:05:41.654248       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:05:41.658136       1 config.go:192] "Starting service config controller"
	I0719 19:05:41.658258       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:05:41.658342       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:05:41.658427       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:05:41.659677       1 config.go:319] "Starting node config controller"
	I0719 19:05:41.659714       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:05:41.761254       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:05:41.761315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:05:41.761241       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e71f310f55a559649349c09e79956008833bf0af02c07e8a146bb6869cef652e] <==
	I0719 18:59:07.453006       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:59:07.464502       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0719 18:59:07.497198       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:59:07.497224       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:59:07.497238       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:59:07.499411       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:59:07.499655       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:59:07.499666       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:59:07.500969       1 config.go:192] "Starting service config controller"
	I0719 18:59:07.501050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:59:07.501098       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:59:07.501115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:59:07.501633       1 config.go:319] "Starting node config controller"
	I0719 18:59:07.502578       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:59:07.601237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:59:07.601247       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:59:07.602951       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06cf7dbdff6300c5e36fdabbbedbe294040c0b9f43818b301d3fb996c076e525] <==
	I0719 19:05:38.833048       1 serving.go:380] Generated self-signed cert in-memory
	W0719 19:05:40.707667       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 19:05:40.708297       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:05:40.708358       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 19:05:40.708383       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 19:05:40.770960       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 19:05:40.773476       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:05:40.786784       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:05:40.787909       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:05:40.791845       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:05:40.787935       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:05:40.892630       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [64edcccff1ef0315c913f1b9f8faf0f09b13456c8de36295b52578d5fde4ac43] <==
	W0719 18:58:50.081528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:58:50.081550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:58:50.081594       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:58:50.081617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:58:50.081662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 18:58:50.081684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 18:58:50.081729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:58:50.081750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:58:50.937018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 18:58:50.937188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:58:51.034745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:58:51.034875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:58:51.094815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:58:51.094899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:58:51.122460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:58:51.122600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:58:51.267735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 18:58:51.267847       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 18:58:51.389609       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:58:51.389727       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 18:58:53.563649       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:04:01.982130       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 19:04:01.984029       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:04:01.984404       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0719 19:04:01.985477       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.920218    3088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928231    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-cni-cfg\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928315    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-xtables-lock\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:40 multinode-269079 kubelet[3088]: I0719 19:05:40.928435    3088 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03ec807-2f25-4220-aade-72b3b186cbff-lib-modules\") pod \"kindnet-k2tw6\" (UID: \"c03ec807-2f25-4220-aade-72b3b186cbff\") " pod="kube-system/kindnet-k2tw6"
	Jul 19 19:05:46 multinode-269079 kubelet[3088]: I0719 19:05:46.074953    3088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 19:06:36 multinode-269079 kubelet[3088]: E0719 19:06:36.855569    3088 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:06:36 multinode-269079 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:07:36 multinode-269079 kubelet[3088]: E0719 19:07:36.860512    3088 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:07:36 multinode-269079 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:07:36 multinode-269079 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:07:36 multinode-269079 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:07:36 multinode-269079 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:08:36 multinode-269079 kubelet[3088]: E0719 19:08:36.855740    3088 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:08:36 multinode-269079 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:08:36 multinode-269079 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:08:36 multinode-269079 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:08:36 multinode-269079 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:09:36 multinode-269079 kubelet[3088]: E0719 19:09:36.852399    3088 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:09:36 multinode-269079 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:09:36 multinode-269079 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:09:36 multinode-269079 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:09:36 multinode-269079 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:09:44.778332   53097 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19307-14841/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-269079 -n multinode-269079
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-269079 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.25s)
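
The "bufio.Scanner: token too long" line in the stderr block above is Go's standard-library scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, which is why the last-start logs could not be included in this post-mortem. A minimal sketch (not minikube's actual logging code; the file path is only for illustration) of reading such a file with an enlarged scanner buffer:

    // A minimal sketch, not minikube's actual code: bufio.Scanner refuses
    // lines longer than bufio.MaxScanTokenSize (64 KiB) unless a larger
    // buffer is supplied via Scanner.Buffer, which is the "token too long"
    // failure reported in the stderr above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    )

    func main() {
    	f, err := os.Open("lastStart.txt") // hypothetical path for illustration
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// Allow single lines up to 1 MiB instead of the 64 KiB default.
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		// With the default buffer this is where ErrTooLong surfaces.
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
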

                                                
                                    
x
+
TestPreload (179.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-364383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0719 19:14:43.525133   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-364383 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m40.70710525s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-364383 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-364383 image pull gcr.io/k8s-minikube/busybox: (2.783048118s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-364383
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-364383: (7.547159677s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-364383 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-364383 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.315731458s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-364383 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
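
The image list above is missing the gcr.io/k8s-minikube/busybox tag that the test pulled before the stop/restart cycle, which is the assertion that failed. A minimal sketch (not the actual preload_test.go; it assumes a "minikube" binary on PATH rather than out/minikube-linux-amd64) that replays the command sequence recorded above and checks that expectation:

    // A minimal sketch, not the actual preload_test.go: it replays the
    // command sequence recorded above against a "minikube" binary assumed
    // to be on PATH, then checks the assertion that failed in this run.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func run(args ...string) string {
    	out, err := exec.Command("minikube", args...).CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "minikube %v: %v\n%s", args, err, out)
    		os.Exit(1)
    	}
    	return string(out)
    }

    func main() {
    	const profile = "test-preload-364383"
    	// Start without preload on an older Kubernetes, pull an extra image,
    	// stop, then restart with the default (preloaded) settings.
    	run("start", "-p", profile, "--memory=2200", "--preload=false",
    		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
    	run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
    	run("stop", "-p", profile)
    	run("start", "-p", profile, "--memory=2200", "--wait=true",
    		"--driver=kvm2", "--container-runtime=crio")

    	// The failing expectation: the previously pulled image should survive the restart.
    	if !strings.Contains(run("-p", profile, "image", "list"), "gcr.io/k8s-minikube/busybox") {
    		fmt.Println("gcr.io/k8s-minikube/busybox missing from image list after restart")
    	}
    }
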
panic.go:626: *** TestPreload FAILED at 2024-07-19 19:16:35.997651926 +0000 UTC m=+3770.261414928
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-364383 -n test-preload-364383
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-364383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-364383 logs -n 25: (1.004580385s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079 sudo cat                                       | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt                       | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m02:/home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n                                                                 | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | multinode-269079-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-269079 ssh -n multinode-269079-m02 sudo cat                                   | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | /home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-269079 node stop m03                                                          | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	| node    | multinode-269079 node start                                                             | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC | 19 Jul 24 19:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| stop    | -p multinode-269079                                                                     | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:01 UTC |                     |
	| start   | -p multinode-269079                                                                     | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:04 UTC | 19 Jul 24 19:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC |                     |
	| node    | multinode-269079 node delete                                                            | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC | 19 Jul 24 19:07 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-269079 stop                                                                   | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:07 UTC |                     |
	| start   | -p multinode-269079                                                                     | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:09 UTC | 19 Jul 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-269079                                                                | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:12 UTC |                     |
	| start   | -p multinode-269079-m02                                                                 | multinode-269079-m02 | jenkins | v1.33.1 | 19 Jul 24 19:12 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-269079-m03                                                                 | multinode-269079-m03 | jenkins | v1.33.1 | 19 Jul 24 19:12 UTC | 19 Jul 24 19:13 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-269079                                                                 | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:13 UTC |                     |
	| delete  | -p multinode-269079-m03                                                                 | multinode-269079-m03 | jenkins | v1.33.1 | 19 Jul 24 19:13 UTC | 19 Jul 24 19:13 UTC |
	| delete  | -p multinode-269079                                                                     | multinode-269079     | jenkins | v1.33.1 | 19 Jul 24 19:13 UTC | 19 Jul 24 19:13 UTC |
	| start   | -p test-preload-364383                                                                  | test-preload-364383  | jenkins | v1.33.1 | 19 Jul 24 19:13 UTC | 19 Jul 24 19:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-364383 image pull                                                          | test-preload-364383  | jenkins | v1.33.1 | 19 Jul 24 19:15 UTC | 19 Jul 24 19:15 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-364383                                                                  | test-preload-364383  | jenkins | v1.33.1 | 19 Jul 24 19:15 UTC | 19 Jul 24 19:15 UTC |
	| start   | -p test-preload-364383                                                                  | test-preload-364383  | jenkins | v1.33.1 | 19 Jul 24 19:15 UTC | 19 Jul 24 19:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-364383 image list                                                          | test-preload-364383  | jenkins | v1.33.1 | 19 Jul 24 19:16 UTC | 19 Jul 24 19:16 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:15:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:15:30.505515   55563 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:15:30.505766   55563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:15:30.505774   55563 out.go:304] Setting ErrFile to fd 2...
	I0719 19:15:30.505778   55563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:15:30.505936   55563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:15:30.506420   55563 out.go:298] Setting JSON to false
	I0719 19:15:30.507228   55563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7073,"bootTime":1721409457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:15:30.507283   55563 start.go:139] virtualization: kvm guest
	I0719 19:15:30.509285   55563 out.go:177] * [test-preload-364383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:15:30.510989   55563 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:15:30.510994   55563 notify.go:220] Checking for updates...
	I0719 19:15:30.513380   55563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:15:30.514584   55563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:15:30.515935   55563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:15:30.517123   55563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:15:30.518204   55563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:15:30.519715   55563 config.go:182] Loaded profile config "test-preload-364383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 19:15:30.520145   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:15:30.520178   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:15:30.534599   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33653
	I0719 19:15:30.535026   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:15:30.535557   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:15:30.535582   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:15:30.535964   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:15:30.536156   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:15:30.537913   55563 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:15:30.539335   55563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:15:30.539631   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:15:30.539665   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:15:30.554005   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0719 19:15:30.554424   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:15:30.554921   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:15:30.554942   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:15:30.555280   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:15:30.555466   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:15:30.590430   55563 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:15:30.591524   55563 start.go:297] selected driver: kvm2
	I0719 19:15:30.591541   55563 start.go:901] validating driver "kvm2" against &{Name:test-preload-364383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-364383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:15:30.591644   55563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:15:30.592430   55563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:15:30.592493   55563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:15:30.607231   55563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:15:30.607562   55563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:15:30.607622   55563 cni.go:84] Creating CNI manager for ""
	I0719 19:15:30.607635   55563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:15:30.607689   55563 start.go:340] cluster config:
	{Name:test-preload-364383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-364383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:15:30.607794   55563 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:15:30.610416   55563 out.go:177] * Starting "test-preload-364383" primary control-plane node in "test-preload-364383" cluster
	I0719 19:15:30.611699   55563 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 19:15:31.014797   55563 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0719 19:15:31.014855   55563 cache.go:56] Caching tarball of preloaded images
	I0719 19:15:31.015086   55563 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 19:15:31.016932   55563 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0719 19:15:31.018307   55563 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 19:15:31.125098   55563 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0719 19:15:42.505568   55563 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 19:15:42.505658   55563 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 19:15:43.344820   55563 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
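For reference, the preload fetched above carries its md5 checksum in the download URL and is verified before being cached. A minimal sketch of reproducing that check by hand, assuming the same URL and checksum taken from the download line above (the local file name is illustrative):

    # download the v1.24.4 cri-o preload and check it against the md5 embedded in the URL
    curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
    echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -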
	I0719 19:15:43.344952   55563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/config.json ...
	I0719 19:15:43.345175   55563 start.go:360] acquireMachinesLock for test-preload-364383: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:15:43.345250   55563 start.go:364] duration metric: took 55.536µs to acquireMachinesLock for "test-preload-364383"
	I0719 19:15:43.345270   55563 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:15:43.345280   55563 fix.go:54] fixHost starting: 
	I0719 19:15:43.345580   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:15:43.345615   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:15:43.360596   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I0719 19:15:43.361025   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:15:43.361463   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:15:43.361484   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:15:43.361864   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:15:43.362059   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:15:43.362200   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetState
	I0719 19:15:43.363769   55563 fix.go:112] recreateIfNeeded on test-preload-364383: state=Stopped err=<nil>
	I0719 19:15:43.363798   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	W0719 19:15:43.363951   55563 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:15:43.365993   55563 out.go:177] * Restarting existing kvm2 VM for "test-preload-364383" ...
	I0719 19:15:43.367536   55563 main.go:141] libmachine: (test-preload-364383) Calling .Start
	I0719 19:15:43.367771   55563 main.go:141] libmachine: (test-preload-364383) Ensuring networks are active...
	I0719 19:15:43.368368   55563 main.go:141] libmachine: (test-preload-364383) Ensuring network default is active
	I0719 19:15:43.368860   55563 main.go:141] libmachine: (test-preload-364383) Ensuring network mk-test-preload-364383 is active
	I0719 19:15:43.369289   55563 main.go:141] libmachine: (test-preload-364383) Getting domain xml...
	I0719 19:15:43.370048   55563 main.go:141] libmachine: (test-preload-364383) Creating domain...
	I0719 19:15:44.557042   55563 main.go:141] libmachine: (test-preload-364383) Waiting to get IP...
	I0719 19:15:44.557965   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:44.558397   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:44.558452   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:44.558368   55646 retry.go:31] will retry after 248.692539ms: waiting for machine to come up
	I0719 19:15:44.808915   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:44.809347   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:44.809380   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:44.809272   55646 retry.go:31] will retry after 324.535533ms: waiting for machine to come up
	I0719 19:15:45.135723   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:45.136178   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:45.136201   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:45.136133   55646 retry.go:31] will retry after 482.155342ms: waiting for machine to come up
	I0719 19:15:45.619768   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:45.620184   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:45.620203   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:45.620155   55646 retry.go:31] will retry after 501.159583ms: waiting for machine to come up
	I0719 19:15:46.122756   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:46.123141   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:46.123175   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:46.123063   55646 retry.go:31] will retry after 669.67533ms: waiting for machine to come up
	I0719 19:15:46.793931   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:46.794290   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:46.794314   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:46.794249   55646 retry.go:31] will retry after 859.468486ms: waiting for machine to come up
	I0719 19:15:47.655427   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:47.655829   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:47.655875   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:47.655786   55646 retry.go:31] will retry after 1.161055631s: waiting for machine to come up
	I0719 19:15:48.818789   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:48.819225   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:48.819273   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:48.819181   55646 retry.go:31] will retry after 1.210271806s: waiting for machine to come up
	I0719 19:15:50.031438   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:50.031767   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:50.031794   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:50.031718   55646 retry.go:31] will retry after 1.463334267s: waiting for machine to come up
	I0719 19:15:51.497042   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:51.497515   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:51.497546   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:51.497463   55646 retry.go:31] will retry after 1.608314623s: waiting for machine to come up
	I0719 19:15:53.108340   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:53.108768   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:53.108803   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:53.108711   55646 retry.go:31] will retry after 2.219546521s: waiting for machine to come up
	I0719 19:15:55.330361   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:55.330804   55563 main.go:141] libmachine: (test-preload-364383) DBG | unable to find current IP address of domain test-preload-364383 in network mk-test-preload-364383
	I0719 19:15:55.330830   55563 main.go:141] libmachine: (test-preload-364383) DBG | I0719 19:15:55.330752   55646 retry.go:31] will retry after 3.575556399s: waiting for machine to come up
	I0719 19:15:58.907812   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:58.908329   55563 main.go:141] libmachine: (test-preload-364383) Found IP for machine: 192.168.39.194
	I0719 19:15:58.908347   55563 main.go:141] libmachine: (test-preload-364383) Reserving static IP address...
	I0719 19:15:58.908357   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has current primary IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:58.908745   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "test-preload-364383", mac: "52:54:00:72:51:55", ip: "192.168.39.194"} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:58.908772   55563 main.go:141] libmachine: (test-preload-364383) Reserved static IP address: 192.168.39.194
	I0719 19:15:58.908807   55563 main.go:141] libmachine: (test-preload-364383) DBG | skip adding static IP to network mk-test-preload-364383 - found existing host DHCP lease matching {name: "test-preload-364383", mac: "52:54:00:72:51:55", ip: "192.168.39.194"}
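The retry loop above is the kvm2 driver polling libvirt until the domain's MAC (52:54:00:72:51:55) picks up a DHCP lease on the mk-test-preload-364383 network, which resolves here to 192.168.39.194. When a machine never gets an IP, the same lease table the driver is waiting on can be read directly on the host; a small sketch, assuming virsh is available alongside the kvm2 driver:

    # list active DHCP leases on the profile's libvirt network
    virsh net-dhcp-leases mk-test-preload-364383
    # or ask libvirt for the domain's interface addresses from the lease database
    virsh domifaddr test-preload-364383 --source lease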
	I0719 19:15:58.908821   55563 main.go:141] libmachine: (test-preload-364383) DBG | Getting to WaitForSSH function...
	I0719 19:15:58.908833   55563 main.go:141] libmachine: (test-preload-364383) Waiting for SSH to be available...
	I0719 19:15:58.910862   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:58.911192   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:58.911222   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:58.911296   55563 main.go:141] libmachine: (test-preload-364383) DBG | Using SSH client type: external
	I0719 19:15:58.911325   55563 main.go:141] libmachine: (test-preload-364383) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa (-rw-------)
	I0719 19:15:58.911353   55563 main.go:141] libmachine: (test-preload-364383) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:15:58.911365   55563 main.go:141] libmachine: (test-preload-364383) DBG | About to run SSH command:
	I0719 19:15:58.911377   55563 main.go:141] libmachine: (test-preload-364383) DBG | exit 0
	I0719 19:15:59.035559   55563 main.go:141] libmachine: (test-preload-364383) DBG | SSH cmd err, output: <nil>: 
	I0719 19:15:59.036011   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetConfigRaw
	I0719 19:15:59.036688   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetIP
	I0719 19:15:59.039014   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.039341   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.039374   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.039554   55563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/config.json ...
	I0719 19:15:59.039715   55563 machine.go:94] provisionDockerMachine start ...
	I0719 19:15:59.039731   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:15:59.039965   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.042040   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.042294   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.042320   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.042422   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.042569   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.042678   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.042827   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.042961   55563 main.go:141] libmachine: Using SSH client type: native
	I0719 19:15:59.043187   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0719 19:15:59.043204   55563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:15:59.147725   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:15:59.147759   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetMachineName
	I0719 19:15:59.148028   55563 buildroot.go:166] provisioning hostname "test-preload-364383"
	I0719 19:15:59.148058   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetMachineName
	I0719 19:15:59.148276   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.150952   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.151287   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.151306   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.151444   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.151639   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.151776   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.151935   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.152064   55563 main.go:141] libmachine: Using SSH client type: native
	I0719 19:15:59.152260   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0719 19:15:59.152274   55563 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-364383 && echo "test-preload-364383" | sudo tee /etc/hostname
	I0719 19:15:59.272907   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-364383
	
	I0719 19:15:59.272934   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.275477   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.275783   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.275814   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.275950   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.276134   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.276282   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.276443   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.276609   55563 main.go:141] libmachine: Using SSH client type: native
	I0719 19:15:59.276811   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0719 19:15:59.276835   55563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-364383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-364383/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-364383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:15:59.391448   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
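The script above is deliberately idempotent: it leaves /etc/hosts alone when the hostname is already present and otherwise rewrites the existing 127.0.1.1 entry rather than appending a duplicate. A quick way to confirm the result on the guest (an illustrative check, not part of the test run):

    grep -n 'test-preload-364383' /etc/hostname /etc/hosts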
	I0719 19:15:59.391480   55563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:15:59.391536   55563 buildroot.go:174] setting up certificates
	I0719 19:15:59.391550   55563 provision.go:84] configureAuth start
	I0719 19:15:59.391561   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetMachineName
	I0719 19:15:59.391855   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetIP
	I0719 19:15:59.394184   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.394509   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.394529   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.394657   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.396567   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.396889   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.396915   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.397030   55563 provision.go:143] copyHostCerts
	I0719 19:15:59.397096   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:15:59.397108   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:15:59.397176   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:15:59.397259   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:15:59.397266   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:15:59.397289   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:15:59.397388   55563 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:15:59.397398   55563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:15:59.397420   55563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:15:59.397468   55563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.test-preload-364383 san=[127.0.0.1 192.168.39.194 localhost minikube test-preload-364383]
	I0719 19:15:59.551805   55563 provision.go:177] copyRemoteCerts
	I0719 19:15:59.551904   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:15:59.551939   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.554330   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.554639   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.554678   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.554837   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.555040   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.555208   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.555314   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:15:59.637523   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:15:59.659170   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:15:59.679630   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:15:59.699891   55563 provision.go:87] duration metric: took 308.326998ms to configureAuth
	I0719 19:15:59.699918   55563 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:15:59.700086   55563 config.go:182] Loaded profile config "test-preload-364383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 19:15:59.700150   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.702781   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.703080   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.703111   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.703280   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.703437   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.703582   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.703728   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.703960   55563 main.go:141] libmachine: Using SSH client type: native
	I0719 19:15:59.704208   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0719 19:15:59.704235   55563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:15:59.964002   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:15:59.964027   55563 machine.go:97] duration metric: took 924.299791ms to provisionDockerMachine
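The tee into /etc/sysconfig/crio.minikube just above marks the service CIDR (10.96.0.0/12) as an insecure registry for cri-o and then restarts the service. If the restart succeeded, the file should contain exactly the line echoed back in the command output; a small sanity check (illustrative):

    cat /etc/sysconfig/crio.minikube
    # expected, per the output above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio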
	I0719 19:15:59.964041   55563 start.go:293] postStartSetup for "test-preload-364383" (driver="kvm2")
	I0719 19:15:59.964054   55563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:15:59.964077   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:15:59.964390   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:15:59.964422   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:15:59.966875   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.967190   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:15:59.967218   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:15:59.967321   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:15:59.967502   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:15:59.967650   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:15:59.967764   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:16:00.050126   55563 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:16:00.053925   55563 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:16:00.053945   55563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:16:00.054030   55563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:16:00.054107   55563 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:16:00.054187   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:16:00.062704   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:16:00.083729   55563 start.go:296] duration metric: took 119.676619ms for postStartSetup
	I0719 19:16:00.083768   55563 fix.go:56] duration metric: took 16.738487926s for fixHost
	I0719 19:16:00.083795   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:16:00.086197   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.086521   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:00.086550   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.086727   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:16:00.086922   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:00.087069   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:00.087224   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:16:00.087360   55563 main.go:141] libmachine: Using SSH client type: native
	I0719 19:16:00.087510   55563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0719 19:16:00.087519   55563 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:16:00.191941   55563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721416560.166244957
	
	I0719 19:16:00.191968   55563 fix.go:216] guest clock: 1721416560.166244957
	I0719 19:16:00.191980   55563 fix.go:229] Guest: 2024-07-19 19:16:00.166244957 +0000 UTC Remote: 2024-07-19 19:16:00.08377348 +0000 UTC m=+29.611874800 (delta=82.471477ms)
	I0719 19:16:00.192011   55563 fix.go:200] guest clock delta is within tolerance: 82.471477ms
	I0719 19:16:00.192020   55563 start.go:83] releasing machines lock for "test-preload-364383", held for 16.846757502s
	I0719 19:16:00.192045   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:00.192290   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetIP
	I0719 19:16:00.194842   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.195162   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:00.195198   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.195387   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:00.195799   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:00.196024   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:00.196105   55563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:16:00.196160   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:16:00.196210   55563 ssh_runner.go:195] Run: cat /version.json
	I0719 19:16:00.196228   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:16:00.198914   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.199063   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.199277   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:00.199308   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.199443   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:16:00.199450   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:00.199476   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:00.199591   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:00.199659   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:16:00.199759   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:16:00.199817   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:00.199894   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:16:00.199953   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:16:00.200107   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:16:00.276234   55563 ssh_runner.go:195] Run: systemctl --version
	I0719 19:16:00.322276   55563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:16:00.468956   55563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:16:00.475018   55563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:16:00.475084   55563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:16:00.489557   55563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:16:00.489581   55563 start.go:495] detecting cgroup driver to use...
	I0719 19:16:00.489628   55563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:16:00.505023   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:16:00.518071   55563 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:16:00.518123   55563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:16:00.530828   55563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:16:00.545670   55563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:16:00.658057   55563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:16:00.809903   55563 docker.go:233] disabling docker service ...
	I0719 19:16:00.809979   55563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:16:00.823394   55563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:16:00.835609   55563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:16:00.950450   55563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:16:01.067446   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:16:01.080452   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:16:01.097068   55563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0719 19:16:01.097130   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.106570   55563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:16:01.106641   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.116144   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.125580   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.134958   55563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:16:01.144593   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.153624   55563 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:16:01.168476   55563 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
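Taken together, the sed edits above set the pause image, switch cri-o to the cgroupfs cgroup manager with conmon placed in the pod cgroup, and open unprivileged low ports through default_sysctls. A rough sketch of the touched fields in /etc/crio/crio.conf.d/02-crio.conf after these edits, assuming the stock layout where pause_image sits under [crio.image] and the cgroup settings under [crio.runtime] (only the lines modified above are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.7"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]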
	I0719 19:16:01.177695   55563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:16:01.186095   55563 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:16:01.186154   55563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:16:01.197046   55563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:16:01.205303   55563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:16:01.310082   55563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:16:01.431062   55563 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:16:01.431133   55563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:16:01.435619   55563 start.go:563] Will wait 60s for crictl version
	I0719 19:16:01.435661   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:01.439027   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:16:01.481293   55563 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:16:01.481370   55563 ssh_runner.go:195] Run: crio --version
	I0719 19:16:01.506512   55563 ssh_runner.go:195] Run: crio --version
	I0719 19:16:01.534422   55563 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0719 19:16:01.535653   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetIP
	I0719 19:16:01.538159   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:01.538489   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:01.538516   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:01.538709   55563 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:16:01.542376   55563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:16:01.554099   55563 kubeadm.go:883] updating cluster {Name:test-preload-364383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-364383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:16:01.554201   55563 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 19:16:01.554238   55563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:16:01.587337   55563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0719 19:16:01.587408   55563 ssh_runner.go:195] Run: which lz4
	I0719 19:16:01.591078   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:16:01.594766   55563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:16:01.594796   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0719 19:16:02.921053   55563 crio.go:462] duration metric: took 1.330018412s to copy over tarball
	I0719 19:16:02.921133   55563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:16:05.178141   55563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256976946s)
	I0719 19:16:05.178166   55563 crio.go:469] duration metric: took 2.257083104s to extract the tarball
	I0719 19:16:05.178175   55563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:16:05.218731   55563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:16:05.256878   55563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0719 19:16:05.256901   55563 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:16:05.256976   55563 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 19:16:05.257003   55563 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 19:16:05.257014   55563 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 19:16:05.257050   55563 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 19:16:05.257054   55563 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 19:16:05.257110   55563 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 19:16:05.257116   55563 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 19:16:05.256977   55563 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:16:05.258505   55563 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:16:05.258535   55563 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 19:16:05.258505   55563 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 19:16:05.258572   55563 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 19:16:05.258574   55563 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 19:16:05.258508   55563 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 19:16:05.258505   55563 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 19:16:05.258506   55563 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 19:16:05.451807   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 19:16:05.488188   55563 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0719 19:16:05.488230   55563 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 19:16:05.488271   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.491700   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 19:16:05.491755   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 19:16:05.505117   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0719 19:16:05.509124   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0719 19:16:05.509580   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0719 19:16:05.518270   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 19:16:05.536133   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 19:16:05.547940   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0719 19:16:05.548031   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 19:16:05.573312   55563 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0719 19:16:05.573355   55563 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0719 19:16:05.573399   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.607920   55563 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0719 19:16:05.607954   55563 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 19:16:05.607990   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.631095   55563 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0719 19:16:05.631128   55563 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 19:16:05.631169   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.631194   55563 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0719 19:16:05.631234   55563 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 19:16:05.631282   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.643638   55563 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0719 19:16:05.643667   55563 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 19:16:05.643723   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.655236   55563 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0719 19:16:05.655264   55563 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 19:16:05.655272   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0719 19:16:05.655284   55563 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 19:16:05.655295   55563 ssh_runner.go:195] Run: which crictl
	I0719 19:16:05.655329   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 19:16:05.655349   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0719 19:16:05.655378   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0719 19:16:05.655413   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0719 19:16:05.655441   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0719 19:16:05.655460   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 19:16:05.771133   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0719 19:16:05.771213   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 19:16:06.099271   55563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:16:08.907419   55563 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.252044178s)
	I0719 19:16:08.907473   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0719 19:16:08.907471   55563 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.252119052s)
	I0719 19:16:08.907488   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0719 19:16:08.907501   55563 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.252068243s)
	I0719 19:16:08.907540   55563 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.252063391s)
	I0719 19:16:08.907571   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0719 19:16:08.907581   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0719 19:16:08.907622   55563 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.252161817s)
	I0719 19:16:08.907655   55563 ssh_runner.go:235] Completed: which crictl: (3.25235076s)
	I0719 19:16:08.907693   55563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0719 19:16:08.907692   55563 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.136467329s)
	I0719 19:16:08.907696   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 19:16:08.907714   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0719 19:16:08.907658   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0719 19:16:08.907721   55563 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 19:16:08.907576   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 19:16:08.907760   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 19:16:08.907763   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 19:16:08.907732   55563 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.808432162s)
	I0719 19:16:08.907790   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0719 19:16:08.924212   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0719 19:16:08.958111   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0719 19:16:08.958217   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0719 19:16:08.958246   55563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0719 19:16:08.958338   55563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 19:16:09.774532   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0719 19:16:09.774572   55563 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 19:16:09.774578   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0719 19:16:09.774629   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 19:16:09.774661   55563 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0719 19:16:10.522130   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0719 19:16:10.522167   55563 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 19:16:10.522213   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0719 19:16:10.671487   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0719 19:16:10.671551   55563 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 19:16:10.671603   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 19:16:11.113544   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0719 19:16:11.113588   55563 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 19:16:11.113650   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0719 19:16:11.449159   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 19:16:11.449200   55563 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 19:16:11.449240   55563 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0719 19:16:13.591328   55563 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.142069925s)
	I0719 19:16:13.591369   55563 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 19:16:13.591394   55563 cache_images.go:123] Successfully loaded all cached images
	I0719 19:16:13.591403   55563 cache_images.go:92] duration metric: took 8.334490429s to LoadCachedImages
	I0719 19:16:13.591414   55563 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.24.4 crio true true} ...
	I0719 19:16:13.591524   55563 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-364383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-364383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:16:13.591611   55563 ssh_runner.go:195] Run: crio config
	I0719 19:16:13.637749   55563 cni.go:84] Creating CNI manager for ""
	I0719 19:16:13.637773   55563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:16:13.637789   55563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:16:13.637811   55563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-364383 NodeName:test-preload-364383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:16:13.637981   55563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-364383"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:16:13.638052   55563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0719 19:16:13.647095   55563 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:16:13.647163   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:16:13.655488   55563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0719 19:16:13.670548   55563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:16:13.684904   55563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0719 19:16:13.699764   55563 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0719 19:16:13.703155   55563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:16:13.713573   55563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:16:13.831134   55563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:16:13.846855   55563 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383 for IP: 192.168.39.194
	I0719 19:16:13.846873   55563 certs.go:194] generating shared ca certs ...
	I0719 19:16:13.846892   55563 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:16:13.847044   55563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:16:13.847100   55563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:16:13.847112   55563 certs.go:256] generating profile certs ...
	I0719 19:16:13.847210   55563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/client.key
	I0719 19:16:13.847281   55563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/apiserver.key.198adfdb
	I0719 19:16:13.847331   55563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/proxy-client.key
	I0719 19:16:13.847457   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:16:13.847490   55563 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:16:13.847511   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:16:13.847542   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:16:13.847572   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:16:13.847602   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:16:13.847662   55563 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:16:13.848552   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:16:13.886914   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:16:13.920504   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:16:13.958082   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:16:13.983035   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:16:14.006908   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:16:14.035307   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:16:14.069765   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:16:14.090790   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:16:14.111307   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:16:14.133056   55563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:16:14.154154   55563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:16:14.169553   55563 ssh_runner.go:195] Run: openssl version
	I0719 19:16:14.174802   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:16:14.185032   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:16:14.189218   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:16:14.189273   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:16:14.194491   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:16:14.204246   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:16:14.214255   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:16:14.218224   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:16:14.218272   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:16:14.223468   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:16:14.233408   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:16:14.243440   55563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:16:14.247338   55563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:16:14.247393   55563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:16:14.252671   55563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:16:14.262644   55563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:16:14.266679   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:16:14.272363   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:16:14.277789   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:16:14.283219   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:16:14.288235   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:16:14.293284   55563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:16:14.298479   55563 kubeadm.go:392] StartCluster: {Name:test-preload-364383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-364383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:16:14.298553   55563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:16:14.298599   55563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:16:14.333395   55563 cri.go:89] found id: ""
	I0719 19:16:14.333467   55563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:16:14.342996   55563 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:16:14.343019   55563 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:16:14.343068   55563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:16:14.351963   55563 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:16:14.352395   55563 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-364383" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:16:14.352526   55563 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-364383" cluster setting kubeconfig missing "test-preload-364383" context setting]
	I0719 19:16:14.352774   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:16:14.353372   55563 kapi.go:59] client config for test-preload-364383: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 19:16:14.353870   55563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:16:14.362596   55563 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.194
	I0719 19:16:14.362627   55563 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:16:14.362641   55563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:16:14.362705   55563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:16:14.394699   55563 cri.go:89] found id: ""
	I0719 19:16:14.394764   55563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:16:14.410642   55563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:16:14.419709   55563 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:16:14.419728   55563 kubeadm.go:157] found existing configuration files:
	
	I0719 19:16:14.419776   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:16:14.427945   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:16:14.427996   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:16:14.437100   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:16:14.445567   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:16:14.445621   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:16:14.454456   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:16:14.462657   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:16:14.462702   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:16:14.471117   55563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:16:14.479305   55563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:16:14.479342   55563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:16:14.487816   55563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:16:14.496502   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:14.583126   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:15.401234   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:15.675521   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:15.734259   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:15.850395   55563 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:16:15.850483   55563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:16:16.350554   55563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:16:16.851237   55563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:16:16.865924   55563 api_server.go:72] duration metric: took 1.015531654s to wait for apiserver process to appear ...
	I0719 19:16:16.865949   55563 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:16:16.865972   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:16.866414   55563 api_server.go:269] stopped: https://192.168.39.194:8443/healthz: Get "https://192.168.39.194:8443/healthz": dial tcp 192.168.39.194:8443: connect: connection refused
	I0719 19:16:17.366096   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:21.144818   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:16:21.144851   55563 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:16:21.144867   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:21.193505   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:16:21.193535   55563 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:16:21.366916   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:21.372180   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:16:21.372209   55563 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:16:21.866847   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:21.871510   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:16:21.871538   55563 api_server.go:103] status: https://192.168.39.194:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:16:22.366092   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:22.371519   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0719 19:16:22.377995   55563 api_server.go:141] control plane version: v1.24.4
	I0719 19:16:22.378020   55563 api_server.go:131] duration metric: took 5.512064204s to wait for apiserver health ...
	I0719 19:16:22.378028   55563 cni.go:84] Creating CNI manager for ""
	I0719 19:16:22.378035   55563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:16:22.379976   55563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:16:22.381404   55563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:16:22.393158   55563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:16:22.410356   55563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:16:22.421184   55563 system_pods.go:59] 8 kube-system pods found
	I0719 19:16:22.421219   55563 system_pods.go:61] "coredns-6d4b75cb6d-28rgv" [0c5c6e4a-8df1-479a-b227-c9b4df49b850] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:16:22.421228   55563 system_pods.go:61] "coredns-6d4b75cb6d-q59pj" [0ec50d47-f3b7-46c9-b9f1-d60f70400d38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:16:22.421236   55563 system_pods.go:61] "etcd-test-preload-364383" [4930e0a6-5325-435a-8483-7cdf2bd83ef3] Running
	I0719 19:16:22.421242   55563 system_pods.go:61] "kube-apiserver-test-preload-364383" [7cf19a19-fd4d-42ea-91a9-74aa5ce10445] Running
	I0719 19:16:22.421247   55563 system_pods.go:61] "kube-controller-manager-test-preload-364383" [bcfc1099-ac0a-46f3-a3d8-5b0d362b7880] Running
	I0719 19:16:22.421256   55563 system_pods.go:61] "kube-proxy-gxkpd" [517b155c-8aaf-4f15-96b9-c0651e8b28c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:16:22.421264   55563 system_pods.go:61] "kube-scheduler-test-preload-364383" [da688cbf-7417-4f56-8f9c-1d4642a26c66] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:16:22.421279   55563 system_pods.go:61] "storage-provisioner" [01219d40-adca-4c89-9414-db413aa69bdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:16:22.421291   55563 system_pods.go:74] duration metric: took 10.913274ms to wait for pod list to return data ...
	I0719 19:16:22.421304   55563 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:16:22.428537   55563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:16:22.428562   55563 node_conditions.go:123] node cpu capacity is 2
	I0719 19:16:22.428574   55563 node_conditions.go:105] duration metric: took 7.265059ms to run NodePressure ...
	I0719 19:16:22.428590   55563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:16:22.600720   55563 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:16:22.604544   55563 kubeadm.go:739] kubelet initialised
	I0719 19:16:22.604563   55563 kubeadm.go:740] duration metric: took 3.820663ms waiting for restarted kubelet to initialise ...
	I0719 19:16:22.604569   55563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:16:22.609689   55563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:22.614814   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.614838   55563 pod_ready.go:81] duration metric: took 5.130376ms for pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:22.614846   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.614860   55563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-q59pj" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:22.618916   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "coredns-6d4b75cb6d-q59pj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.618935   55563 pod_ready.go:81] duration metric: took 4.0688ms for pod "coredns-6d4b75cb6d-q59pj" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:22.618943   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "coredns-6d4b75cb6d-q59pj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.618955   55563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:22.622657   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "etcd-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.622676   55563 pod_ready.go:81] duration metric: took 3.713371ms for pod "etcd-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:22.622684   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "etcd-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.622689   55563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:22.816897   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "kube-apiserver-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.816935   55563 pod_ready.go:81] duration metric: took 194.232678ms for pod "kube-apiserver-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:22.816949   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "kube-apiserver-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:22.816958   55563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:23.216024   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:23.216056   55563 pod_ready.go:81] duration metric: took 399.086535ms for pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:23.216066   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:23.216072   55563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gxkpd" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:23.614099   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "kube-proxy-gxkpd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:23.614125   55563 pod_ready.go:81] duration metric: took 398.043874ms for pod "kube-proxy-gxkpd" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:23.614134   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "kube-proxy-gxkpd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:23.614140   55563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:24.013865   55563 pod_ready.go:97] node "test-preload-364383" hosting pod "kube-scheduler-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:24.013889   55563 pod_ready.go:81] duration metric: took 399.74233ms for pod "kube-scheduler-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	E0719 19:16:24.013897   55563 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-364383" hosting pod "kube-scheduler-test-preload-364383" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:24.013905   55563 pod_ready.go:38] duration metric: took 1.409328502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:16:24.013925   55563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:16:24.024742   55563 ops.go:34] apiserver oom_adj: -16
	I0719 19:16:24.024763   55563 kubeadm.go:597] duration metric: took 9.681736686s to restartPrimaryControlPlane
	I0719 19:16:24.024771   55563 kubeadm.go:394] duration metric: took 9.7262978s to StartCluster
	I0719 19:16:24.024785   55563 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:16:24.024862   55563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:16:24.025410   55563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:16:24.025610   55563 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:16:24.025679   55563 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:16:24.025781   55563 addons.go:69] Setting storage-provisioner=true in profile "test-preload-364383"
	I0719 19:16:24.025811   55563 addons.go:234] Setting addon storage-provisioner=true in "test-preload-364383"
	W0719 19:16:24.025823   55563 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:16:24.025855   55563 host.go:66] Checking if "test-preload-364383" exists ...
	I0719 19:16:24.025866   55563 config.go:182] Loaded profile config "test-preload-364383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 19:16:24.025810   55563 addons.go:69] Setting default-storageclass=true in profile "test-preload-364383"
	I0719 19:16:24.025917   55563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-364383"
	I0719 19:16:24.026144   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:16:24.026179   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:16:24.026286   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:16:24.026325   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:16:24.028116   55563 out.go:177] * Verifying Kubernetes components...
	I0719 19:16:24.029420   55563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:16:24.041159   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0719 19:16:24.041576   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:16:24.042112   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:16:24.042161   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:16:24.042168   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0719 19:16:24.042524   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:16:24.042553   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:16:24.042727   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetState
	I0719 19:16:24.042958   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:16:24.042972   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:16:24.043254   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:16:24.043821   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:16:24.043885   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:16:24.045289   55563 kapi.go:59] client config for test-preload-364383: &rest.Config{Host:"https://192.168.39.194:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/test-preload-364383/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 19:16:24.045622   55563 addons.go:234] Setting addon default-storageclass=true in "test-preload-364383"
	W0719 19:16:24.045650   55563 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:16:24.045679   55563 host.go:66] Checking if "test-preload-364383" exists ...
	I0719 19:16:24.046085   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:16:24.046131   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:16:24.058817   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41167
	I0719 19:16:24.059208   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:16:24.059649   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:16:24.059669   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:16:24.060036   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:16:24.060232   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetState
	I0719 19:16:24.060760   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0719 19:16:24.061107   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:16:24.061571   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:16:24.061593   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:16:24.061888   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:24.061907   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:16:24.062367   55563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:16:24.062403   55563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:16:24.064009   55563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:16:24.065491   55563 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:16:24.065510   55563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:16:24.065528   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:16:24.068200   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:24.068666   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:24.068700   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:24.068823   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:16:24.069011   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:24.069149   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:16:24.069316   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:16:24.077420   55563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0719 19:16:24.077749   55563 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:16:24.078181   55563 main.go:141] libmachine: Using API Version  1
	I0719 19:16:24.078200   55563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:16:24.078493   55563 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:16:24.078675   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetState
	I0719 19:16:24.079989   55563 main.go:141] libmachine: (test-preload-364383) Calling .DriverName
	I0719 19:16:24.080203   55563 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:16:24.080217   55563 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:16:24.080236   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHHostname
	I0719 19:16:24.082957   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:24.083375   55563 main.go:141] libmachine: (test-preload-364383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:55", ip: ""} in network mk-test-preload-364383: {Iface:virbr1 ExpiryTime:2024-07-19 20:15:53 +0000 UTC Type:0 Mac:52:54:00:72:51:55 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:test-preload-364383 Clientid:01:52:54:00:72:51:55}
	I0719 19:16:24.083399   55563 main.go:141] libmachine: (test-preload-364383) DBG | domain test-preload-364383 has defined IP address 192.168.39.194 and MAC address 52:54:00:72:51:55 in network mk-test-preload-364383
	I0719 19:16:24.083553   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHPort
	I0719 19:16:24.083744   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHKeyPath
	I0719 19:16:24.083906   55563 main.go:141] libmachine: (test-preload-364383) Calling .GetSSHUsername
	I0719 19:16:24.084049   55563 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/test-preload-364383/id_rsa Username:docker}
	I0719 19:16:24.186856   55563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:16:24.201368   55563 node_ready.go:35] waiting up to 6m0s for node "test-preload-364383" to be "Ready" ...
	I0719 19:16:24.294428   55563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:16:24.302865   55563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:16:25.150599   55563 main.go:141] libmachine: Making call to close driver server
	I0719 19:16:25.150624   55563 main.go:141] libmachine: (test-preload-364383) Calling .Close
	I0719 19:16:25.150909   55563 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:16:25.150931   55563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:16:25.150941   55563 main.go:141] libmachine: Making call to close driver server
	I0719 19:16:25.150949   55563 main.go:141] libmachine: (test-preload-364383) Calling .Close
	I0719 19:16:25.151199   55563 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:16:25.151219   55563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:16:25.151234   55563 main.go:141] libmachine: (test-preload-364383) DBG | Closing plugin on server side
	I0719 19:16:25.156932   55563 main.go:141] libmachine: Making call to close driver server
	I0719 19:16:25.156954   55563 main.go:141] libmachine: (test-preload-364383) Calling .Close
	I0719 19:16:25.157168   55563 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:16:25.157185   55563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:16:25.157199   55563 main.go:141] libmachine: Making call to close driver server
	I0719 19:16:25.157208   55563 main.go:141] libmachine: (test-preload-364383) Calling .Close
	I0719 19:16:25.157426   55563 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:16:25.157454   55563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:16:25.157469   55563 main.go:141] libmachine: (test-preload-364383) DBG | Closing plugin on server side
	I0719 19:16:25.165364   55563 main.go:141] libmachine: Making call to close driver server
	I0719 19:16:25.165382   55563 main.go:141] libmachine: (test-preload-364383) Calling .Close
	I0719 19:16:25.165602   55563 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:16:25.165621   55563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:16:25.165623   55563 main.go:141] libmachine: (test-preload-364383) DBG | Closing plugin on server side
	I0719 19:16:25.167388   55563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 19:16:25.168510   55563 addons.go:510] duration metric: took 1.142840019s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 19:16:26.205865   55563 node_ready.go:53] node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:28.705490   55563 node_ready.go:53] node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:31.204889   55563 node_ready.go:53] node "test-preload-364383" has status "Ready":"False"
	I0719 19:16:31.706246   55563 node_ready.go:49] node "test-preload-364383" has status "Ready":"True"
	I0719 19:16:31.706269   55563 node_ready.go:38] duration metric: took 7.504870445s for node "test-preload-364383" to be "Ready" ...
	I0719 19:16:31.706277   55563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:16:31.711953   55563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.717703   55563 pod_ready.go:92] pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:31.717727   55563 pod_ready.go:81] duration metric: took 5.750037ms for pod "coredns-6d4b75cb6d-28rgv" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.717738   55563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.721938   55563 pod_ready.go:92] pod "etcd-test-preload-364383" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:31.721956   55563 pod_ready.go:81] duration metric: took 4.21156ms for pod "etcd-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.721964   55563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.727449   55563 pod_ready.go:92] pod "kube-apiserver-test-preload-364383" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:31.727469   55563 pod_ready.go:81] duration metric: took 5.49636ms for pod "kube-apiserver-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.727480   55563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.731574   55563 pod_ready.go:92] pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:31.731594   55563 pod_ready.go:81] duration metric: took 4.105757ms for pod "kube-controller-manager-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:31.731605   55563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gxkpd" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:32.105382   55563 pod_ready.go:92] pod "kube-proxy-gxkpd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:32.105407   55563 pod_ready.go:81] duration metric: took 373.795078ms for pod "kube-proxy-gxkpd" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:32.105416   55563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:34.111926   55563 pod_ready.go:102] pod "kube-scheduler-test-preload-364383" in "kube-system" namespace has status "Ready":"False"
	I0719 19:16:35.112401   55563 pod_ready.go:92] pod "kube-scheduler-test-preload-364383" in "kube-system" namespace has status "Ready":"True"
	I0719 19:16:35.112425   55563 pod_ready.go:81] duration metric: took 3.007003179s for pod "kube-scheduler-test-preload-364383" in "kube-system" namespace to be "Ready" ...
	I0719 19:16:35.112435   55563 pod_ready.go:38] duration metric: took 3.406148351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:16:35.112447   55563 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:16:35.112501   55563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:16:35.126832   55563 api_server.go:72] duration metric: took 11.101192798s to wait for apiserver process to appear ...
	I0719 19:16:35.126858   55563 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:16:35.126877   55563 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0719 19:16:35.131891   55563 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0719 19:16:35.132717   55563 api_server.go:141] control plane version: v1.24.4
	I0719 19:16:35.132739   55563 api_server.go:131] duration metric: took 5.873475ms to wait for apiserver health ...
	I0719 19:16:35.132748   55563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:16:35.138180   55563 system_pods.go:59] 7 kube-system pods found
	I0719 19:16:35.138201   55563 system_pods.go:61] "coredns-6d4b75cb6d-28rgv" [0c5c6e4a-8df1-479a-b227-c9b4df49b850] Running
	I0719 19:16:35.138206   55563 system_pods.go:61] "etcd-test-preload-364383" [4930e0a6-5325-435a-8483-7cdf2bd83ef3] Running
	I0719 19:16:35.138209   55563 system_pods.go:61] "kube-apiserver-test-preload-364383" [7cf19a19-fd4d-42ea-91a9-74aa5ce10445] Running
	I0719 19:16:35.138212   55563 system_pods.go:61] "kube-controller-manager-test-preload-364383" [bcfc1099-ac0a-46f3-a3d8-5b0d362b7880] Running
	I0719 19:16:35.138215   55563 system_pods.go:61] "kube-proxy-gxkpd" [517b155c-8aaf-4f15-96b9-c0651e8b28c9] Running
	I0719 19:16:35.138218   55563 system_pods.go:61] "kube-scheduler-test-preload-364383" [da688cbf-7417-4f56-8f9c-1d4642a26c66] Running
	I0719 19:16:35.138220   55563 system_pods.go:61] "storage-provisioner" [01219d40-adca-4c89-9414-db413aa69bdd] Running
	I0719 19:16:35.138225   55563 system_pods.go:74] duration metric: took 5.471584ms to wait for pod list to return data ...
	I0719 19:16:35.138231   55563 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:16:35.306380   55563 default_sa.go:45] found service account: "default"
	I0719 19:16:35.306409   55563 default_sa.go:55] duration metric: took 168.17321ms for default service account to be created ...
	I0719 19:16:35.306419   55563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:16:35.508265   55563 system_pods.go:86] 7 kube-system pods found
	I0719 19:16:35.508291   55563 system_pods.go:89] "coredns-6d4b75cb6d-28rgv" [0c5c6e4a-8df1-479a-b227-c9b4df49b850] Running
	I0719 19:16:35.508302   55563 system_pods.go:89] "etcd-test-preload-364383" [4930e0a6-5325-435a-8483-7cdf2bd83ef3] Running
	I0719 19:16:35.508312   55563 system_pods.go:89] "kube-apiserver-test-preload-364383" [7cf19a19-fd4d-42ea-91a9-74aa5ce10445] Running
	I0719 19:16:35.508318   55563 system_pods.go:89] "kube-controller-manager-test-preload-364383" [bcfc1099-ac0a-46f3-a3d8-5b0d362b7880] Running
	I0719 19:16:35.508322   55563 system_pods.go:89] "kube-proxy-gxkpd" [517b155c-8aaf-4f15-96b9-c0651e8b28c9] Running
	I0719 19:16:35.508325   55563 system_pods.go:89] "kube-scheduler-test-preload-364383" [da688cbf-7417-4f56-8f9c-1d4642a26c66] Running
	I0719 19:16:35.508329   55563 system_pods.go:89] "storage-provisioner" [01219d40-adca-4c89-9414-db413aa69bdd] Running
	I0719 19:16:35.508335   55563 system_pods.go:126] duration metric: took 201.910984ms to wait for k8s-apps to be running ...
	I0719 19:16:35.508341   55563 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:16:35.508385   55563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:16:35.521466   55563 system_svc.go:56] duration metric: took 13.11597ms WaitForService to wait for kubelet
	I0719 19:16:35.521497   55563 kubeadm.go:582] duration metric: took 11.495863746s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:16:35.521514   55563 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:16:35.706643   55563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:16:35.706673   55563 node_conditions.go:123] node cpu capacity is 2
	I0719 19:16:35.706684   55563 node_conditions.go:105] duration metric: took 185.164452ms to run NodePressure ...
	I0719 19:16:35.706699   55563 start.go:241] waiting for startup goroutines ...
	I0719 19:16:35.706708   55563 start.go:246] waiting for cluster config update ...
	I0719 19:16:35.706719   55563 start.go:255] writing updated cluster config ...
	I0719 19:16:35.707014   55563 ssh_runner.go:195] Run: rm -f paused
	I0719 19:16:35.752116   55563 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0719 19:16:35.754037   55563 out.go:177] 
	W0719 19:16:35.755613   55563 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0719 19:16:35.756746   55563 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0719 19:16:35.757829   55563 out.go:177] * Done! kubectl is now configured to use "test-preload-364383" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.594789370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416596594768518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d3b4020-8482-40f5-80af-8c35436d64ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.595314051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78d6ff7d-1031-4c6a-a178-77be4fc7fd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.595386212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78d6ff7d-1031-4c6a-a178-77be4fc7fd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.595553657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b20b8caf65848f59e18ea36b8cb82c85d61d57806f6f0b57c7a27c55792d1824,PodSandboxId:5d9727fceb8a6c185d374b5696b54519cf4a31d89431010f74e68acc803bfec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721416589794774682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-28rgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c5c6e4a-8df1-479a-b227-c9b4df49b850,},Annotations:map[string]string{io.kubernetes.container.hash: 44b087c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af2e1d240f7e02e875b5b9484b81f02ea3894a0d118fa88f28abedc6f8c1299,PodSandboxId:19d9cfbc04fbf3da8e79da97394db4de83e4d844f02f8af12f463c0ca01a21a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721416583107568038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 01219d40-adca-4c89-9414-db413aa69bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 46c6a758,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5570078c8410f8f8aa1594128d5984b3b1994be00798f3360672c6241aca8,PodSandboxId:9f2589195c60430c4a2be2be4c5d0fa28466cd9b1b5490fe59a068ddf082fa5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721416582789648434,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gxkpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51
7b155c-8aaf-4f15-96b9-c0651e8b28c9,},Annotations:map[string]string{io.kubernetes.container.hash: b4684649,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8dc9ea20e9d82c07c6a39c2019f314d19dd6403e5fb488d6d39af7f3e079dc,PodSandboxId:d55ebdb96cb36804fa84ad95c5b52d36dd4f5c45b53bbe3224f64a0cf23d0c0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721416576576673233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17b7b4c4
689b2993d369c3be9975af8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ad45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340134005a12afc978561cd6dac9c6df6f4e23b1cb3e900b223678b61da2a5fc,PodSandboxId:1609e3017cf6a5c37d53611235d024fddde5605262a4b33593dbae305fdb67b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721416576539405803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5235be258e4252f1c54beb8d95437d,},Annotations:map
[string]string{io.kubernetes.container.hash: d8a2229a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4fe24fb24c94dc72edd4718621e1421fe48226ceb311076c2a4558b7767c31,PodSandboxId:208a81c3eb4e2ac0267d98519e425e427e1b6108bc49fe773d21fedc3dfbd993,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721416576515887866,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f489ec53da974cf76c83aac3dccb545,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad6f972dae9c0579d5dcec0b864578658745c3e87d3401a990889ec77273302,PodSandboxId:2efb29e09067046b0fac3e04edf1c851efcbb509c0922d881a1627a345c9ef99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721416576488127038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c6b57c83117d4ecee6eec0500df8e88,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78d6ff7d-1031-4c6a-a178-77be4fc7fd53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.629203760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32467b1c-482a-46b1-ad38-b08273cb009f name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.629285556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32467b1c-482a-46b1-ad38-b08273cb009f name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.630347805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61cc1bcd-f905-435f-b7b4-0c7e017efeb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.630853843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416596630815077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61cc1bcd-f905-435f-b7b4-0c7e017efeb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.631313624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12eaed42-c279-4085-b752-dd80e83f347a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.631379418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12eaed42-c279-4085-b752-dd80e83f347a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.631542267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b20b8caf65848f59e18ea36b8cb82c85d61d57806f6f0b57c7a27c55792d1824,PodSandboxId:5d9727fceb8a6c185d374b5696b54519cf4a31d89431010f74e68acc803bfec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721416589794774682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-28rgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c5c6e4a-8df1-479a-b227-c9b4df49b850,},Annotations:map[string]string{io.kubernetes.container.hash: 44b087c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af2e1d240f7e02e875b5b9484b81f02ea3894a0d118fa88f28abedc6f8c1299,PodSandboxId:19d9cfbc04fbf3da8e79da97394db4de83e4d844f02f8af12f463c0ca01a21a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721416583107568038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 01219d40-adca-4c89-9414-db413aa69bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 46c6a758,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5570078c8410f8f8aa1594128d5984b3b1994be00798f3360672c6241aca8,PodSandboxId:9f2589195c60430c4a2be2be4c5d0fa28466cd9b1b5490fe59a068ddf082fa5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721416582789648434,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gxkpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51
7b155c-8aaf-4f15-96b9-c0651e8b28c9,},Annotations:map[string]string{io.kubernetes.container.hash: b4684649,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8dc9ea20e9d82c07c6a39c2019f314d19dd6403e5fb488d6d39af7f3e079dc,PodSandboxId:d55ebdb96cb36804fa84ad95c5b52d36dd4f5c45b53bbe3224f64a0cf23d0c0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721416576576673233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17b7b4c4
689b2993d369c3be9975af8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ad45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340134005a12afc978561cd6dac9c6df6f4e23b1cb3e900b223678b61da2a5fc,PodSandboxId:1609e3017cf6a5c37d53611235d024fddde5605262a4b33593dbae305fdb67b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721416576539405803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5235be258e4252f1c54beb8d95437d,},Annotations:map
[string]string{io.kubernetes.container.hash: d8a2229a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4fe24fb24c94dc72edd4718621e1421fe48226ceb311076c2a4558b7767c31,PodSandboxId:208a81c3eb4e2ac0267d98519e425e427e1b6108bc49fe773d21fedc3dfbd993,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721416576515887866,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f489ec53da974cf76c83aac3dccb545,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad6f972dae9c0579d5dcec0b864578658745c3e87d3401a990889ec77273302,PodSandboxId:2efb29e09067046b0fac3e04edf1c851efcbb509c0922d881a1627a345c9ef99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721416576488127038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c6b57c83117d4ecee6eec0500df8e88,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12eaed42-c279-4085-b752-dd80e83f347a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.663903751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=beeed267-2af8-404f-a513-49b8c48228f7 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.663984983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=beeed267-2af8-404f-a513-49b8c48228f7 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.665655316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1477a793-2b30-4219-8cb9-f1ac14fcd83e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.666085967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416596666062827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1477a793-2b30-4219-8cb9-f1ac14fcd83e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.666774656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cc15794-465d-4479-8323-7c8e21858758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.666832384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cc15794-465d-4479-8323-7c8e21858758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.667012665Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b20b8caf65848f59e18ea36b8cb82c85d61d57806f6f0b57c7a27c55792d1824,PodSandboxId:5d9727fceb8a6c185d374b5696b54519cf4a31d89431010f74e68acc803bfec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721416589794774682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-28rgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c5c6e4a-8df1-479a-b227-c9b4df49b850,},Annotations:map[string]string{io.kubernetes.container.hash: 44b087c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af2e1d240f7e02e875b5b9484b81f02ea3894a0d118fa88f28abedc6f8c1299,PodSandboxId:19d9cfbc04fbf3da8e79da97394db4de83e4d844f02f8af12f463c0ca01a21a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721416583107568038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 01219d40-adca-4c89-9414-db413aa69bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 46c6a758,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5570078c8410f8f8aa1594128d5984b3b1994be00798f3360672c6241aca8,PodSandboxId:9f2589195c60430c4a2be2be4c5d0fa28466cd9b1b5490fe59a068ddf082fa5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721416582789648434,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gxkpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51
7b155c-8aaf-4f15-96b9-c0651e8b28c9,},Annotations:map[string]string{io.kubernetes.container.hash: b4684649,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8dc9ea20e9d82c07c6a39c2019f314d19dd6403e5fb488d6d39af7f3e079dc,PodSandboxId:d55ebdb96cb36804fa84ad95c5b52d36dd4f5c45b53bbe3224f64a0cf23d0c0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721416576576673233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17b7b4c4
689b2993d369c3be9975af8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ad45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340134005a12afc978561cd6dac9c6df6f4e23b1cb3e900b223678b61da2a5fc,PodSandboxId:1609e3017cf6a5c37d53611235d024fddde5605262a4b33593dbae305fdb67b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721416576539405803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5235be258e4252f1c54beb8d95437d,},Annotations:map
[string]string{io.kubernetes.container.hash: d8a2229a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4fe24fb24c94dc72edd4718621e1421fe48226ceb311076c2a4558b7767c31,PodSandboxId:208a81c3eb4e2ac0267d98519e425e427e1b6108bc49fe773d21fedc3dfbd993,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721416576515887866,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f489ec53da974cf76c83aac3dccb545,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad6f972dae9c0579d5dcec0b864578658745c3e87d3401a990889ec77273302,PodSandboxId:2efb29e09067046b0fac3e04edf1c851efcbb509c0922d881a1627a345c9ef99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721416576488127038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c6b57c83117d4ecee6eec0500df8e88,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cc15794-465d-4479-8323-7c8e21858758 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.696111046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3833413-6bf9-46e5-8157-9bfc87358e87 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.696247560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3833413-6bf9-46e5-8157-9bfc87358e87 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.697521502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f71e4a8-5484-44c9-9430-a353c6ae4f45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.698055199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721416596698033120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f71e4a8-5484-44c9-9430-a353c6ae4f45 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.698503197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85087292-733b-4c91-a006-8afd76da18ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.698550479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85087292-733b-4c91-a006-8afd76da18ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:16:36 test-preload-364383 crio[703]: time="2024-07-19 19:16:36.698719729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b20b8caf65848f59e18ea36b8cb82c85d61d57806f6f0b57c7a27c55792d1824,PodSandboxId:5d9727fceb8a6c185d374b5696b54519cf4a31d89431010f74e68acc803bfec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721416589794774682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-28rgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c5c6e4a-8df1-479a-b227-c9b4df49b850,},Annotations:map[string]string{io.kubernetes.container.hash: 44b087c1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af2e1d240f7e02e875b5b9484b81f02ea3894a0d118fa88f28abedc6f8c1299,PodSandboxId:19d9cfbc04fbf3da8e79da97394db4de83e4d844f02f8af12f463c0ca01a21a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721416583107568038,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 01219d40-adca-4c89-9414-db413aa69bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 46c6a758,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e5570078c8410f8f8aa1594128d5984b3b1994be00798f3360672c6241aca8,PodSandboxId:9f2589195c60430c4a2be2be4c5d0fa28466cd9b1b5490fe59a068ddf082fa5f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721416582789648434,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gxkpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51
7b155c-8aaf-4f15-96b9-c0651e8b28c9,},Annotations:map[string]string{io.kubernetes.container.hash: b4684649,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd8dc9ea20e9d82c07c6a39c2019f314d19dd6403e5fb488d6d39af7f3e079dc,PodSandboxId:d55ebdb96cb36804fa84ad95c5b52d36dd4f5c45b53bbe3224f64a0cf23d0c0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721416576576673233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a17b7b4c4
689b2993d369c3be9975af8,},Annotations:map[string]string{io.kubernetes.container.hash: f7ad45d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340134005a12afc978561cd6dac9c6df6f4e23b1cb3e900b223678b61da2a5fc,PodSandboxId:1609e3017cf6a5c37d53611235d024fddde5605262a4b33593dbae305fdb67b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721416576539405803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5235be258e4252f1c54beb8d95437d,},Annotations:map
[string]string{io.kubernetes.container.hash: d8a2229a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4fe24fb24c94dc72edd4718621e1421fe48226ceb311076c2a4558b7767c31,PodSandboxId:208a81c3eb4e2ac0267d98519e425e427e1b6108bc49fe773d21fedc3dfbd993,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721416576515887866,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f489ec53da974cf76c83aac3dccb545,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad6f972dae9c0579d5dcec0b864578658745c3e87d3401a990889ec77273302,PodSandboxId:2efb29e09067046b0fac3e04edf1c851efcbb509c0922d881a1627a345c9ef99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721416576488127038,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-364383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c6b57c83117d4ecee6eec0500df8e88,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85087292-733b-4c91-a006-8afd76da18ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b20b8caf65848       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   5d9727fceb8a6       coredns-6d4b75cb6d-28rgv
	0af2e1d240f7e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   19d9cfbc04fbf       storage-provisioner
	98e5570078c84       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   9f2589195c604       kube-proxy-gxkpd
	fd8dc9ea20e9d       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   d55ebdb96cb36       kube-apiserver-test-preload-364383
	340134005a12a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   1609e3017cf6a       etcd-test-preload-364383
	cf4fe24fb24c9       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   208a81c3eb4e2       kube-scheduler-test-preload-364383
	2ad6f972dae9c       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   2efb29e090670       kube-controller-manager-test-preload-364383
	
	
	==> coredns [b20b8caf65848f59e18ea36b8cb82c85d61d57806f6f0b57c7a27c55792d1824] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:57308 - 4345 "HINFO IN 7914524369235965745.2348857794318424391. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012068743s
	
	
	==> describe nodes <==
	Name:               test-preload-364383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-364383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=test-preload-364383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_15_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:15:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-364383
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:16:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:16:31 +0000   Fri, 19 Jul 2024 19:14:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:16:31 +0000   Fri, 19 Jul 2024 19:14:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:16:31 +0000   Fri, 19 Jul 2024 19:14:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:16:31 +0000   Fri, 19 Jul 2024 19:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    test-preload-364383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3504a5522dd442e6b32f25df8f66867e
	  System UUID:                3504a552-2dd4-42e6-b32f-25df8f66867e
	  Boot ID:                    5447da39-b114-481c-9d29-e8bd5eb11b0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-28rgv                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-test-preload-364383                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kube-apiserver-test-preload-364383             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-364383    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-gxkpd                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-364383             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x5 over 99s)  kubelet          Node test-preload-364383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x4 over 99s)  kubelet          Node test-preload-364383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x4 over 99s)  kubelet          Node test-preload-364383 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node test-preload-364383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node test-preload-364383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                kubelet          Node test-preload-364383 status is now: NodeHasSufficientPID
	  Normal  NodeReady                81s                kubelet          Node test-preload-364383 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node test-preload-364383 event: Registered Node test-preload-364383 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node test-preload-364383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node test-preload-364383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)  kubelet          Node test-preload-364383 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-364383 event: Registered Node test-preload-364383 in Controller
	
	
	==> dmesg <==
	[Jul19 19:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050855] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037156] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.458587] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.697208] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.484897] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul19 19:16] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.052586] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061291] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.172842] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.120170] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.245559] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +12.514959] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.058341] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.772392] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +7.156670] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.340307] systemd-fstab-generator[1713]: Ignoring "noauto" option for root device
	[  +5.539577] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [340134005a12afc978561cd6dac9c6df6f4e23b1cb3e900b223678b61da2a5fc] <==
	{"level":"info","ts":"2024-07-19T19:16:16.982Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b4bd7d4638784c91","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-19T19:16:16.982Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T19:16:16.995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2024-07-19T19:16:16.995Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2024-07-19T19:16:16.996Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:16:16.996Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:16:17.007Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:16:17.010Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-07-19T19:16:17.016Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-07-19T19:16:17.016Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:16:17.016Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-07-19T19:16:18.721Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:test-preload-364383 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:16:18.722Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:16:18.722Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:16:18.723Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:16:18.724Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-07-19T19:16:18.724Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:16:18.724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:16:36 up 0 min,  0 users,  load average: 0.90, 0.25, 0.08
	Linux test-preload-364383 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fd8dc9ea20e9d82c07c6a39c2019f314d19dd6403e5fb488d6d39af7f3e079dc] <==
	I0719 19:16:21.126465       1 controller.go:85] Starting OpenAPI controller
	I0719 19:16:21.126506       1 controller.go:85] Starting OpenAPI V3 controller
	I0719 19:16:21.126531       1 naming_controller.go:291] Starting NamingConditionController
	I0719 19:16:21.127036       1 establishing_controller.go:76] Starting EstablishingController
	I0719 19:16:21.127079       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0719 19:16:21.127109       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0719 19:16:21.127189       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0719 19:16:21.224738       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0719 19:16:21.262364       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:16:21.262852       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0719 19:16:21.282847       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0719 19:16:21.283648       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:16:21.290774       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:16:21.292280       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0719 19:16:21.296768       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:16:21.784948       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 19:16:22.088645       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 19:16:22.513321       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0719 19:16:22.525129       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0719 19:16:22.561854       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0719 19:16:22.581222       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:16:22.586619       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 19:16:23.080289       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0719 19:16:33.747778       1 controller.go:611] quota admission added evaluator for: endpoints
	I0719 19:16:33.772215       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2ad6f972dae9c0579d5dcec0b864578658745c3e87d3401a990889ec77273302] <==
	I0719 19:16:33.564513       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0719 19:16:33.565802       1 shared_informer.go:262] Caches are synced for taint
	I0719 19:16:33.565943       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0719 19:16:33.566086       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-364383. Assuming now as a timestamp.
	I0719 19:16:33.566142       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0719 19:16:33.565957       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0719 19:16:33.566467       1 event.go:294] "Event occurred" object="test-preload-364383" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-364383 event: Registered Node test-preload-364383 in Controller"
	I0719 19:16:33.575263       1 shared_informer.go:262] Caches are synced for persistent volume
	I0719 19:16:33.579459       1 shared_informer.go:262] Caches are synced for PVC protection
	I0719 19:16:33.581789       1 shared_informer.go:262] Caches are synced for service account
	I0719 19:16:33.587101       1 shared_informer.go:262] Caches are synced for deployment
	I0719 19:16:33.596231       1 shared_informer.go:262] Caches are synced for cronjob
	I0719 19:16:33.601748       1 shared_informer.go:262] Caches are synced for ephemeral
	I0719 19:16:33.606584       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0719 19:16:33.614583       1 shared_informer.go:262] Caches are synced for GC
	I0719 19:16:33.710228       1 shared_informer.go:262] Caches are synced for disruption
	I0719 19:16:33.710325       1 disruption.go:371] Sending events to api server.
	I0719 19:16:33.739886       1 shared_informer.go:262] Caches are synced for endpoint
	I0719 19:16:33.763612       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0719 19:16:33.779109       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 19:16:33.794039       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 19:16:33.815262       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0719 19:16:34.218538       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 19:16:34.271228       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 19:16:34.271314       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [98e5570078c8410f8f8aa1594128d5984b3b1994be00798f3360672c6241aca8] <==
	I0719 19:16:22.998702       1 node.go:163] Successfully retrieved node IP: 192.168.39.194
	I0719 19:16:22.998813       1 server_others.go:138] "Detected node IP" address="192.168.39.194"
	I0719 19:16:22.998844       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0719 19:16:23.067637       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0719 19:16:23.067670       1 server_others.go:206] "Using iptables Proxier"
	I0719 19:16:23.067712       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0719 19:16:23.068762       1 server.go:661] "Version info" version="v1.24.4"
	I0719 19:16:23.068799       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:16:23.074824       1 config.go:317] "Starting service config controller"
	I0719 19:16:23.075122       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0719 19:16:23.075239       1 config.go:226] "Starting endpoint slice config controller"
	I0719 19:16:23.075320       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0719 19:16:23.076474       1 config.go:444] "Starting node config controller"
	I0719 19:16:23.076513       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0719 19:16:23.178789       1 shared_informer.go:262] Caches are synced for node config
	I0719 19:16:23.178839       1 shared_informer.go:262] Caches are synced for service config
	I0719 19:16:23.178862       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [cf4fe24fb24c94dc72edd4718621e1421fe48226ceb311076c2a4558b7767c31] <==
	I0719 19:16:17.781972       1 serving.go:348] Generated self-signed cert in-memory
	W0719 19:16:21.162303       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 19:16:21.164227       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:16:21.164299       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 19:16:21.164308       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 19:16:21.208288       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0719 19:16:21.208320       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:16:21.217404       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:16:21.217503       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:16:21.217421       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0719 19:16:21.217438       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:16:21.318334       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: I0719 19:16:21.837670    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume\") pod \"coredns-6d4b75cb6d-28rgv\" (UID: \"0c5c6e4a-8df1-479a-b227-c9b4df49b850\") " pod="kube-system/coredns-6d4b75cb6d-28rgv"
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: I0719 19:16:21.837773    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq75r\" (UniqueName: \"kubernetes.io/projected/0c5c6e4a-8df1-479a-b227-c9b4df49b850-kube-api-access-xq75r\") pod \"coredns-6d4b75cb6d-28rgv\" (UID: \"0c5c6e4a-8df1-479a-b227-c9b4df49b850\") " pod="kube-system/coredns-6d4b75cb6d-28rgv"
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: I0719 19:16:21.837874    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume\") pod \"coredns-6d4b75cb6d-q59pj\" (UID: \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\") " pod="kube-system/coredns-6d4b75cb6d-q59pj"
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: I0719 19:16:21.837968    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhpd\" (UniqueName: \"kubernetes.io/projected/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-kube-api-access-xqhpd\") pod \"coredns-6d4b75cb6d-q59pj\" (UID: \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\") " pod="kube-system/coredns-6d4b75cb6d-q59pj"
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: I0719 19:16:21.838067    1093 reconciler.go:159] "Reconciler: start to sync state"
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: E0719 19:16:21.941692    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: E0719 19:16:21.941889    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume podName:0c5c6e4a-8df1-479a-b227-c9b4df49b850 nodeName:}" failed. No retries permitted until 2024-07-19 19:16:22.441822494 +0000 UTC m=+6.775818403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume") pod "coredns-6d4b75cb6d-28rgv" (UID: "0c5c6e4a-8df1-479a-b227-c9b4df49b850") : object "kube-system"/"coredns" not registered
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: E0719 19:16:21.942050    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 19:16:21 test-preload-364383 kubelet[1093]: E0719 19:16:21.942150    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume podName:0ec50d47-f3b7-46c9-b9f1-d60f70400d38 nodeName:}" failed. No retries permitted until 2024-07-19 19:16:22.442137269 +0000 UTC m=+6.776133178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume") pod "coredns-6d4b75cb6d-q59pj" (UID: "0ec50d47-f3b7-46c9-b9f1-d60f70400d38") : object "kube-system"/"coredns" not registered
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.042814    1093 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume\") pod \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\" (UID: \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\") "
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: W0719 19:16:22.044125    1093 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0ec50d47-f3b7-46c9-b9f1-d60f70400d38/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.044970    1093 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ec50d47-f3b7-46c9-b9f1-d60f70400d38" (UID: "0ec50d47-f3b7-46c9-b9f1-d60f70400d38"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.144746    1093 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-config-volume\") on node \"test-preload-364383\" DevicePath \"\""
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.245616    1093 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqhpd\" (UniqueName: \"kubernetes.io/projected/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-kube-api-access-xqhpd\") pod \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\" (UID: \"0ec50d47-f3b7-46c9-b9f1-d60f70400d38\") "
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.251797    1093 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-kube-api-access-xqhpd" (OuterVolumeSpecName: "kube-api-access-xqhpd") pod "0ec50d47-f3b7-46c9-b9f1-d60f70400d38" (UID: "0ec50d47-f3b7-46c9-b9f1-d60f70400d38"). InnerVolumeSpecName "kube-api-access-xqhpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: I0719 19:16:22.347275    1093 reconciler.go:384] "Volume detached for volume \"kube-api-access-xqhpd\" (UniqueName: \"kubernetes.io/projected/0ec50d47-f3b7-46c9-b9f1-d60f70400d38-kube-api-access-xqhpd\") on node \"test-preload-364383\" DevicePath \"\""
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: E0719 19:16:22.447617    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: E0719 19:16:22.447694    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume podName:0c5c6e4a-8df1-479a-b227-c9b4df49b850 nodeName:}" failed. No retries permitted until 2024-07-19 19:16:23.447679356 +0000 UTC m=+7.781675262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume") pod "coredns-6d4b75cb6d-28rgv" (UID: "0c5c6e4a-8df1-479a-b227-c9b4df49b850") : object "kube-system"/"coredns" not registered
	Jul 19 19:16:22 test-preload-364383 kubelet[1093]: E0719 19:16:22.897107    1093 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-28rgv" podUID=0c5c6e4a-8df1-479a-b227-c9b4df49b850
	Jul 19 19:16:23 test-preload-364383 kubelet[1093]: E0719 19:16:23.455916    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 19:16:23 test-preload-364383 kubelet[1093]: E0719 19:16:23.455999    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume podName:0c5c6e4a-8df1-479a-b227-c9b4df49b850 nodeName:}" failed. No retries permitted until 2024-07-19 19:16:25.455981767 +0000 UTC m=+9.789977662 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume") pod "coredns-6d4b75cb6d-28rgv" (UID: "0c5c6e4a-8df1-479a-b227-c9b4df49b850") : object "kube-system"/"coredns" not registered
	Jul 19 19:16:24 test-preload-364383 kubelet[1093]: E0719 19:16:24.897288    1093 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-28rgv" podUID=0c5c6e4a-8df1-479a-b227-c9b4df49b850
	Jul 19 19:16:25 test-preload-364383 kubelet[1093]: E0719 19:16:25.474425    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 19:16:25 test-preload-364383 kubelet[1093]: E0719 19:16:25.474514    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume podName:0c5c6e4a-8df1-479a-b227-c9b4df49b850 nodeName:}" failed. No retries permitted until 2024-07-19 19:16:29.474497126 +0000 UTC m=+13.808493024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0c5c6e4a-8df1-479a-b227-c9b4df49b850-config-volume") pod "coredns-6d4b75cb6d-28rgv" (UID: "0c5c6e4a-8df1-479a-b227-c9b4df49b850") : object "kube-system"/"coredns" not registered
	Jul 19 19:16:25 test-preload-364383 kubelet[1093]: I0719 19:16:25.906976    1093 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0ec50d47-f3b7-46c9-b9f1-d60f70400d38 path="/var/lib/kubelet/pods/0ec50d47-f3b7-46c9-b9f1-d60f70400d38/volumes"
	
	
	==> storage-provisioner [0af2e1d240f7e02e875b5b9484b81f02ea3894a0d118fa88f28abedc6f8c1299] <==
	I0719 19:16:23.213151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-364383 -n test-preload-364383
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-364383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-364383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-364383
--- FAIL: TestPreload (179.10s)

                                                
                                    
x
+
TestKubernetesUpgrade (418.49s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m8.981200125s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-821162] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-821162" primary control-plane node in "kubernetes-upgrade-821162" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:18:29.124452   57543 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:18:29.125042   57543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:18:29.125056   57543 out.go:304] Setting ErrFile to fd 2...
	I0719 19:18:29.125063   57543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:18:29.125302   57543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:18:29.126070   57543 out.go:298] Setting JSON to false
	I0719 19:18:29.127018   57543 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7252,"bootTime":1721409457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:18:29.127072   57543 start.go:139] virtualization: kvm guest
	I0719 19:18:29.129205   57543 out.go:177] * [kubernetes-upgrade-821162] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:18:29.134410   57543 notify.go:220] Checking for updates...
	I0719 19:18:29.135321   57543 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:18:29.138407   57543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:18:29.140779   57543 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:18:29.143594   57543 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:18:29.146315   57543 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:18:29.147694   57543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:18:29.149203   57543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:18:29.188202   57543 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 19:18:29.190779   57543 start.go:297] selected driver: kvm2
	I0719 19:18:29.190816   57543 start.go:901] validating driver "kvm2" against <nil>
	I0719 19:18:29.190835   57543 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:18:29.191881   57543 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:18:29.217349   57543 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:18:29.233928   57543 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:18:29.233968   57543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 19:18:29.234251   57543 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 19:18:29.234325   57543 cni.go:84] Creating CNI manager for ""
	I0719 19:18:29.234342   57543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:18:29.234355   57543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 19:18:29.234432   57543 start.go:340] cluster config:
	{Name:kubernetes-upgrade-821162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-821162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:18:29.234564   57543 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:18:29.236491   57543 out.go:177] * Starting "kubernetes-upgrade-821162" primary control-plane node in "kubernetes-upgrade-821162" cluster
	I0719 19:18:29.237855   57543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:18:29.237900   57543 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:18:29.237910   57543 cache.go:56] Caching tarball of preloaded images
	I0719 19:18:29.237991   57543 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:18:29.238004   57543 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:18:29.238356   57543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/config.json ...
	I0719 19:18:29.238388   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/config.json: {Name:mk9c958a37f49bb7370b3f1e2fb1dc5f67e5ee9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:18:29.238534   57543 start.go:360] acquireMachinesLock for kubernetes-upgrade-821162: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:19:12.224238   57543 start.go:364] duration metric: took 42.985680663s to acquireMachinesLock for "kubernetes-upgrade-821162"
	I0719 19:19:12.224310   57543 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-821162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-821162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:19:12.224405   57543 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 19:19:12.226730   57543 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 19:19:12.226906   57543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:19:12.226956   57543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:19:12.244627   57543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0719 19:19:12.245008   57543 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:19:12.245602   57543 main.go:141] libmachine: Using API Version  1
	I0719 19:19:12.245626   57543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:19:12.246108   57543 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:19:12.246287   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetMachineName
	I0719 19:19:12.246483   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:12.246653   57543 start.go:159] libmachine.API.Create for "kubernetes-upgrade-821162" (driver="kvm2")
	I0719 19:19:12.246688   57543 client.go:168] LocalClient.Create starting
	I0719 19:19:12.246730   57543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 19:19:12.246768   57543 main.go:141] libmachine: Decoding PEM data...
	I0719 19:19:12.246790   57543 main.go:141] libmachine: Parsing certificate...
	I0719 19:19:12.246858   57543 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 19:19:12.246885   57543 main.go:141] libmachine: Decoding PEM data...
	I0719 19:19:12.246902   57543 main.go:141] libmachine: Parsing certificate...
	I0719 19:19:12.246927   57543 main.go:141] libmachine: Running pre-create checks...
	I0719 19:19:12.246946   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .PreCreateCheck
	I0719 19:19:12.247384   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetConfigRaw
	I0719 19:19:12.247807   57543 main.go:141] libmachine: Creating machine...
	I0719 19:19:12.247825   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Create
	I0719 19:19:12.247963   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Creating KVM machine...
	I0719 19:19:12.249010   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found existing default KVM network
	I0719 19:19:12.251009   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.250830   58211 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0719 19:19:12.251547   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.251456   58211 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:f4:71} reservation:<nil>}
	I0719 19:19:12.252374   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.252286   58211 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112eb0}
	I0719 19:19:12.252408   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | created network xml: 
	I0719 19:19:12.252437   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | <network>
	I0719 19:19:12.252449   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   <name>mk-kubernetes-upgrade-821162</name>
	I0719 19:19:12.252462   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   <dns enable='no'/>
	I0719 19:19:12.252471   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   
	I0719 19:19:12.252488   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0719 19:19:12.252501   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |     <dhcp>
	I0719 19:19:12.252513   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0719 19:19:12.252520   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |     </dhcp>
	I0719 19:19:12.252535   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   </ip>
	I0719 19:19:12.252542   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG |   
	I0719 19:19:12.252554   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | </network>
	I0719 19:19:12.252564   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | 
	I0719 19:19:12.258000   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | trying to create private KVM network mk-kubernetes-upgrade-821162 192.168.61.0/24...
	I0719 19:19:12.322549   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | private KVM network mk-kubernetes-upgrade-821162 192.168.61.0/24 created
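
The "skipping subnet ... that is reserved / taken" and "using free private subnet" lines above are the driver's scan for a private /24 it can claim for the new KVM network: in this run 192.168.39.0/24 and 192.168.50.0/24 were already in use, so the machine landed on 192.168.61.0/24. A rough, self-contained Go sketch of that idea (names are illustrative and simplified; the real scan also consults libvirt's own reservations, which this version ignores):

    package main

    import (
    	"fmt"
    	"net"
    )

    // taken reports whether any local interface address already sits inside cidr.
    func taken(cidr *net.IPNet, addrs []net.Addr) bool {
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		panic(err)
    	}
    	// Candidate third octets in the same order the log walks them: 39, 50, 61, ...
    	for _, octet := range []int{39, 50, 61, 72, 83} {
    		_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
    		if taken(cidr, addrs) {
    			fmt.Println("skipping subnet", cidr, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", cidr)
    		return
    	}
    	fmt.Println("no free subnet found")
    }
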
	I0719 19:19:12.322588   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.322516   58211 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:19:12.322602   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162 ...
	I0719 19:19:12.322617   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 19:19:12.322816   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 19:19:12.579477   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.579340   58211 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa...
	I0719 19:19:12.738253   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.738148   58211 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/kubernetes-upgrade-821162.rawdisk...
	I0719 19:19:12.738284   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Writing magic tar header
	I0719 19:19:12.738310   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Writing SSH key tar header
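
The raw disk step above ("Creating raw disk image", "Writing magic tar header", "Writing SSH key tar header") lays the freshly generated SSH key down as a small tar stream at the head of a sparse .rawdisk file, which the guest picks up on first boot. A hedged sketch of that layout (the exact on-disk format and entry names are assumptions, not taken from minikube's common.go):

    package main

    import (
    	"archive/tar"
    	"log"
    	"os"
    )

    // createRawDisk writes a small tar stream carrying the SSH key material at the
    // start of the file, then grows the file to its full size as a sparse file.
    func createRawDisk(path string, sizeBytes int64, keyName string, key []byte) error {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{Name: keyName, Mode: 0o600, Size: int64(len(key))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(key); err != nil {
    		return err
    	}
    	if err := tw.Close(); err != nil {
    		return err
    	}
    	// Truncate grows the file without allocating blocks, so a 20000MB disk
    	// starts out taking almost no space on the host.
    	return f.Truncate(sizeBytes)
    }

    func main() {
    	if err := createRawDisk("/tmp/example.rawdisk", 20*1024*1024, ".ssh/key.pub", []byte("ssh-rsa AAAA...example")); err != nil {
    		log.Fatal(err)
    	}
    }
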
	I0719 19:19:12.738328   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:12.738280   58211 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162 ...
	I0719 19:19:12.738429   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162
	I0719 19:19:12.738460   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 19:19:12.738472   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162 (perms=drwx------)
	I0719 19:19:12.738485   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:19:12.738494   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 19:19:12.738503   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 19:19:12.738513   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home/jenkins
	I0719 19:19:12.738525   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Checking permissions on dir: /home
	I0719 19:19:12.738541   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 19:19:12.738554   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 19:19:12.738562   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Skipping /home - not owner
	I0719 19:19:12.738586   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 19:19:12.738607   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 19:19:12.738622   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
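
The "Checking permissions on dir" / "Setting executable bit set on" walk above makes sure every ancestor of the machine directory that the CI user owns is traversable; directories it does not own (here /home) are skipped. A minimal sketch of that walk (helper name is illustrative, and ownership is handled only through the chmod error rather than a real owner check):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // fixPermissions walks from the machine directory up toward /, ensuring each
    // directory has the owner execute bit so the path stays traversable.
    func fixPermissions(machineDir string) error {
    	for dir := machineDir; dir != "/"; dir = filepath.Dir(dir) {
    		info, err := os.Stat(dir)
    		if err != nil {
    			return err
    		}
    		if info.Mode().Perm()&0o100 == 0 { // owner execute bit missing
    			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
    				fmt.Println("Skipping", dir, "-", err) // e.g. "Skipping /home - not owner"
    				continue
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	_ = fixPermissions("/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162")
    }
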
	I0719 19:19:12.738633   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Creating domain...
	I0719 19:19:12.739821   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) define libvirt domain using xml: 
	I0719 19:19:12.739851   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) <domain type='kvm'>
	I0719 19:19:12.739862   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <name>kubernetes-upgrade-821162</name>
	I0719 19:19:12.739875   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <memory unit='MiB'>2200</memory>
	I0719 19:19:12.739887   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <vcpu>2</vcpu>
	I0719 19:19:12.739897   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <features>
	I0719 19:19:12.739905   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <acpi/>
	I0719 19:19:12.739922   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <apic/>
	I0719 19:19:12.739936   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <pae/>
	I0719 19:19:12.739947   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     
	I0719 19:19:12.739956   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   </features>
	I0719 19:19:12.739969   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <cpu mode='host-passthrough'>
	I0719 19:19:12.739981   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   
	I0719 19:19:12.739988   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   </cpu>
	I0719 19:19:12.739997   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <os>
	I0719 19:19:12.740009   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <type>hvm</type>
	I0719 19:19:12.740022   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <boot dev='cdrom'/>
	I0719 19:19:12.740033   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <boot dev='hd'/>
	I0719 19:19:12.740061   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <bootmenu enable='no'/>
	I0719 19:19:12.740086   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   </os>
	I0719 19:19:12.740099   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   <devices>
	I0719 19:19:12.740109   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <disk type='file' device='cdrom'>
	I0719 19:19:12.740131   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/boot2docker.iso'/>
	I0719 19:19:12.740143   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <target dev='hdc' bus='scsi'/>
	I0719 19:19:12.740158   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <readonly/>
	I0719 19:19:12.740170   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </disk>
	I0719 19:19:12.740180   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <disk type='file' device='disk'>
	I0719 19:19:12.740190   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 19:19:12.740205   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/kubernetes-upgrade-821162.rawdisk'/>
	I0719 19:19:12.740217   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <target dev='hda' bus='virtio'/>
	I0719 19:19:12.740225   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </disk>
	I0719 19:19:12.740234   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <interface type='network'>
	I0719 19:19:12.740260   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <source network='mk-kubernetes-upgrade-821162'/>
	I0719 19:19:12.740279   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <model type='virtio'/>
	I0719 19:19:12.740288   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </interface>
	I0719 19:19:12.740297   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <interface type='network'>
	I0719 19:19:12.740303   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <source network='default'/>
	I0719 19:19:12.740307   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <model type='virtio'/>
	I0719 19:19:12.740315   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </interface>
	I0719 19:19:12.740325   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <serial type='pty'>
	I0719 19:19:12.740338   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <target port='0'/>
	I0719 19:19:12.740352   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </serial>
	I0719 19:19:12.740377   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <console type='pty'>
	I0719 19:19:12.740410   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <target type='serial' port='0'/>
	I0719 19:19:12.740425   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </console>
	I0719 19:19:12.740437   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     <rng model='virtio'>
	I0719 19:19:12.740451   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)       <backend model='random'>/dev/random</backend>
	I0719 19:19:12.740462   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     </rng>
	I0719 19:19:12.740473   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     
	I0719 19:19:12.740484   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)     
	I0719 19:19:12.740497   57543 main.go:141] libmachine: (kubernetes-upgrade-821162)   </devices>
	I0719 19:19:12.740507   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) </domain>
	I0719 19:19:12.740518   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) 
	I0719 19:19:12.744661   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:e6:15:c2 in network default
	I0719 19:19:12.745240   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Ensuring networks are active...
	I0719 19:19:12.745257   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:12.746028   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Ensuring network default is active
	I0719 19:19:12.746502   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Ensuring network mk-kubernetes-upgrade-821162 is active
	I0719 19:19:12.747027   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Getting domain xml...
	I0719 19:19:12.747687   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Creating domain...
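
With the private network up and the domain XML generated, the driver defines and boots the VM through libvirt. A condensed sketch of those calls, assuming the libvirt.org/go/libvirt bindings (the kvm2 driver talks to the same libvirt API, but this is not its actual code):

    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart connects to the system hypervisor, makes sure the private
    // network is running, defines the persistent domain from XML, and boots it.
    func defineAndStart(domainXML, networkName string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the profile dump
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	net, err := conn.LookupNetworkByName(networkName)
    	if err != nil {
    		return err
    	}
    	defer net.Free()
    	if active, _ := net.IsActive(); !active { // "Ensuring network ... is active"
    		if err := net.Create(); err != nil {
    			return err
    		}
    	}

    	dom, err := conn.DomainDefineXML(domainXML) // the <domain type='kvm'> document built above
    	if err != nil {
    		return err
    	}
    	defer dom.Free()
    	return dom.Create() // boots the VM; the driver then waits for an IP
    }

    func main() {
    	err := defineAndStart("<domain type='kvm'>...</domain>", "mk-kubernetes-upgrade-821162")
    	log.Println(err)
    }
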
	I0719 19:19:14.027130   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Waiting to get IP...
	I0719 19:19:14.028063   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.028455   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.028484   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:14.028448   58211 retry.go:31] will retry after 218.272613ms: waiting for machine to come up
	I0719 19:19:14.248002   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.248576   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.248605   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:14.248522   58211 retry.go:31] will retry after 299.628282ms: waiting for machine to come up
	I0719 19:19:14.550093   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.550655   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.550682   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:14.550615   58211 retry.go:31] will retry after 350.638688ms: waiting for machine to come up
	I0719 19:19:14.903076   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.903542   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:14.903585   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:14.903510   58211 retry.go:31] will retry after 501.853242ms: waiting for machine to come up
	I0719 19:19:15.407237   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:15.407721   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:15.407748   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:15.407666   58211 retry.go:31] will retry after 744.371467ms: waiting for machine to come up
	I0719 19:19:16.153672   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:16.154255   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:16.154326   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:16.154242   58211 retry.go:31] will retry after 582.920071ms: waiting for machine to come up
	I0719 19:19:16.739221   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:16.739790   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:16.739821   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:16.739733   58211 retry.go:31] will retry after 978.446876ms: waiting for machine to come up
	I0719 19:19:17.720261   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:17.720687   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:17.720713   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:17.720654   58211 retry.go:31] will retry after 1.447967042s: waiting for machine to come up
	I0719 19:19:19.170471   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:19.170920   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:19.170965   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:19.170897   58211 retry.go:31] will retry after 1.591605599s: waiting for machine to come up
	I0719 19:19:20.764700   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:20.765190   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:20.765216   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:20.765135   58211 retry.go:31] will retry after 1.938898538s: waiting for machine to come up
	I0719 19:19:22.705637   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:22.706098   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:22.706119   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:22.706077   58211 retry.go:31] will retry after 2.805252821s: waiting for machine to come up
	I0719 19:19:25.512800   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:25.513308   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:25.513352   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:25.513209   58211 retry.go:31] will retry after 3.376172218s: waiting for machine to come up
	I0719 19:19:28.891999   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:28.892366   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find current IP address of domain kubernetes-upgrade-821162 in network mk-kubernetes-upgrade-821162
	I0719 19:19:28.892387   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | I0719 19:19:28.892330   58211 retry.go:31] will retry after 3.757536179s: waiting for machine to come up
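
The repeated "unable to find current IP address ... will retry after ..." lines are a poll-with-backoff loop: ask libvirt for a DHCP lease, sleep a growing jittered interval, try again until a deadline passes. A minimal sketch of the pattern (helper names are illustrative, not minikube's retry.go API):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the timeout expires,
    // sleeping a growing, jittered interval between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if backoff < 4*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for an IP")
    }

    func main() {
    	// Stub lookup that "finds" an address after a few attempts, to exercise the loop.
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.61.58", nil
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }
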
	I0719 19:19:32.650914   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.651424   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Found IP for machine: 192.168.61.58
	I0719 19:19:32.651449   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Reserving static IP address...
	I0719 19:19:32.651478   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has current primary IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.651850   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-821162", mac: "52:54:00:2d:93:d8", ip: "192.168.61.58"} in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.723629   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Getting to WaitForSSH function...
	I0719 19:19:32.723659   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Reserved static IP address: 192.168.61.58
	I0719 19:19:32.723689   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Waiting for SSH to be available...
	I0719 19:19:32.725985   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.726436   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:32.726459   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.726607   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Using SSH client type: external
	I0719 19:19:32.726639   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa (-rw-------)
	I0719 19:19:32.726682   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:19:32.726697   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | About to run SSH command:
	I0719 19:19:32.726710   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | exit 0
	I0719 19:19:32.851938   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | SSH cmd err, output: <nil>: 
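
WaitForSSH with the "external" client type simply shells out to /usr/bin/ssh with the argument list logged above and runs "exit 0" until it succeeds. A sketch of that invocation via os/exec (the key path and IP are the ones from this run; the helper name is made up):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // externalSSH runs a single command on the guest through the system ssh binary,
    // using the same options the log shows for the external client.
    func externalSSH(ip, keyPath, command string) ([]byte, error) {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "ControlMaster=no",
    		"-o", "ControlPath=none",
    		"-o", "LogLevel=quiet",
    		"-o", "PasswordAuthentication=no",
    		"-o", "ServerAliveInterval=60",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"docker@" + ip,
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		command,
    	}
    	return exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    }

    func main() {
    	out, err := externalSSH("192.168.61.58",
    		"/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa",
    		"exit 0") // the WaitForSSH probe from the log
    	fmt.Printf("output=%q err=%v\n", out, err)
    }
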
	I0719 19:19:32.852181   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) KVM machine creation complete!
	I0719 19:19:32.852541   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetConfigRaw
	I0719 19:19:32.853084   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:32.853291   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:32.853484   57543 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 19:19:32.853502   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetState
	I0719 19:19:32.854768   57543 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 19:19:32.854781   57543 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 19:19:32.854786   57543 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 19:19:32.854792   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:32.857266   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.857678   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:32.857706   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.857865   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:32.858023   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:32.858182   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:32.858326   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:32.858461   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:32.858657   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:32.858671   57543 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 19:19:32.966858   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:19:32.966879   57543 main.go:141] libmachine: Detecting the provisioner...
	I0719 19:19:32.966886   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:32.969773   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.970214   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:32.970314   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:32.970366   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:32.970568   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:32.970734   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:32.970907   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:32.971083   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:32.971288   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:32.971304   57543 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 19:19:33.076009   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 19:19:33.076082   57543 main.go:141] libmachine: found compatible host: buildroot
	I0719 19:19:33.076096   57543 main.go:141] libmachine: Provisioning with buildroot...
	I0719 19:19:33.076106   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetMachineName
	I0719 19:19:33.076339   57543 buildroot.go:166] provisioning hostname "kubernetes-upgrade-821162"
	I0719 19:19:33.076359   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetMachineName
	I0719 19:19:33.076572   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.079118   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.079537   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.079563   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.079717   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:33.079971   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.080146   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.080290   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:33.080441   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:33.080619   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:33.080632   57543 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-821162 && echo "kubernetes-upgrade-821162" | sudo tee /etc/hostname
	I0719 19:19:33.198512   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-821162
	
	I0719 19:19:33.198540   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.201203   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.201510   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.201533   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.201722   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:33.201956   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.202173   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.202348   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:33.202537   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:33.202708   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:33.202725   57543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-821162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-821162/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-821162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:19:33.321315   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:19:33.321341   57543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:19:33.321384   57543 buildroot.go:174] setting up certificates
	I0719 19:19:33.321398   57543 provision.go:84] configureAuth start
	I0719 19:19:33.321408   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetMachineName
	I0719 19:19:33.321703   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetIP
	I0719 19:19:33.324864   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.325266   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.325294   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.325503   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.327954   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.328300   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.328322   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.328461   57543 provision.go:143] copyHostCerts
	I0719 19:19:33.328537   57543 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:19:33.328547   57543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:19:33.328599   57543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:19:33.328697   57543 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:19:33.328706   57543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:19:33.328725   57543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:19:33.328786   57543 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:19:33.328793   57543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:19:33.328811   57543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:19:33.328871   57543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-821162 san=[127.0.0.1 192.168.61.58 kubernetes-upgrade-821162 localhost minikube]
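
The "generating server cert ... san=[...]" step mints an RSA key and an x509 server certificate signed by the machine CA, with the listed hostnames and IPs as subject alternative names. A compressed crypto/x509 sketch of that shape (assumed layout, not minikube's actual cert generation code):

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert returns a fresh key and a DER-encoded server certificate signed
    // by the machine CA, carrying the SANs from the log line above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-821162"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-821162", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.58")},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return certDER, key, err
    }
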
	I0719 19:19:33.543092   57543 provision.go:177] copyRemoteCerts
	I0719 19:19:33.543173   57543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:19:33.543198   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.546389   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.546740   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.546781   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.546928   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:33.547126   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.547348   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:33.547517   57543 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:19:33.631472   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:19:33.654098   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0719 19:19:33.675143   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:19:33.695473   57543 provision.go:87] duration metric: took 374.063648ms to configureAuth
	I0719 19:19:33.695501   57543 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:19:33.695695   57543 config.go:182] Loaded profile config "kubernetes-upgrade-821162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:19:33.695784   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.698487   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.698753   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.698782   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.698856   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:33.699060   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.699216   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.699408   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:33.699556   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:33.699761   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:33.699782   57543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:19:33.960651   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:19:33.960684   57543 main.go:141] libmachine: Checking connection to Docker...
	I0719 19:19:33.960695   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetURL
	I0719 19:19:33.962021   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Using libvirt version 6000000
	I0719 19:19:33.964127   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.964424   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.964451   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.964605   57543 main.go:141] libmachine: Docker is up and running!
	I0719 19:19:33.964633   57543 main.go:141] libmachine: Reticulating splines...
	I0719 19:19:33.964640   57543 client.go:171] duration metric: took 21.717941047s to LocalClient.Create
	I0719 19:19:33.964660   57543 start.go:167] duration metric: took 21.718017732s to libmachine.API.Create "kubernetes-upgrade-821162"
	I0719 19:19:33.964669   57543 start.go:293] postStartSetup for "kubernetes-upgrade-821162" (driver="kvm2")
	I0719 19:19:33.964681   57543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:19:33.964703   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:33.964918   57543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:19:33.964944   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:33.966977   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.967254   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:33.967292   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:33.967389   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:33.967562   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:33.967704   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:33.967826   57543 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:19:34.049646   57543 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:19:34.053806   57543 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:19:34.053834   57543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:19:34.053903   57543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:19:34.053988   57543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:19:34.054072   57543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:19:34.063490   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:19:34.085775   57543 start.go:296] duration metric: took 121.092182ms for postStartSetup
	I0719 19:19:34.085835   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetConfigRaw
	I0719 19:19:34.086433   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetIP
	I0719 19:19:34.088940   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.089286   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:34.089313   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.089685   57543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/config.json ...
	I0719 19:19:34.089861   57543 start.go:128] duration metric: took 21.865445066s to createHost
	I0719 19:19:34.089882   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:34.092328   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.092669   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:34.092696   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.092845   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:34.093027   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:34.093249   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:34.093398   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:34.093606   57543 main.go:141] libmachine: Using SSH client type: native
	I0719 19:19:34.093778   57543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0719 19:19:34.093791   57543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:19:34.200349   57543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721416774.172372870
	
	I0719 19:19:34.200369   57543 fix.go:216] guest clock: 1721416774.172372870
	I0719 19:19:34.200375   57543 fix.go:229] Guest: 2024-07-19 19:19:34.17237287 +0000 UTC Remote: 2024-07-19 19:19:34.089871468 +0000 UTC m=+65.015786114 (delta=82.501402ms)
	I0719 19:19:34.200403   57543 fix.go:200] guest clock delta is within tolerance: 82.501402ms
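
The guest-clock check above runs `date +%s.%N` in the VM, diffs it against the host clock, and only resyncs the guest if the skew exceeds a tolerance; here the delta is about 82.5ms, well inside it. A tiny sketch of that comparison (illustrative; float parsing is precise enough for a millisecond-scale check):

    package provision

    import (
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output (e.g. "1721416774.172372870")
    // and returns its absolute distance from the host clock.
    func clockDelta(guest string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guest, 64)
    	if err != nil {
    		return 0, err
    	}
    	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guestTime)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }
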
	I0719 19:19:34.200407   57543 start.go:83] releasing machines lock for "kubernetes-upgrade-821162", held for 21.976137955s
	I0719 19:19:34.200428   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:34.200727   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetIP
	I0719 19:19:34.203513   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.203857   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:34.203886   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.204013   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:34.204480   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:34.204677   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:19:34.204764   57543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:19:34.204816   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:34.204880   57543 ssh_runner.go:195] Run: cat /version.json
	I0719 19:19:34.204906   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:19:34.207359   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.207643   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:34.207661   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.207728   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.207765   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:34.207919   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:34.208067   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:34.208126   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:34.208146   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:34.208187   57543 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:19:34.208314   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:19:34.208431   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:19:34.208600   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:19:34.208765   57543 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:19:34.327039   57543 ssh_runner.go:195] Run: systemctl --version
	I0719 19:19:34.333270   57543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:19:34.497128   57543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:19:34.502527   57543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:19:34.502598   57543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:19:34.518178   57543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:19:34.518208   57543 start.go:495] detecting cgroup driver to use...
	I0719 19:19:34.518269   57543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:19:34.534379   57543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:19:34.550228   57543 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:19:34.550291   57543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:19:34.564069   57543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:19:34.579094   57543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:19:34.713756   57543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:19:34.860247   57543 docker.go:233] disabling docker service ...
	I0719 19:19:34.860313   57543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:19:34.876051   57543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:19:34.888956   57543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:19:35.023172   57543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:19:35.144543   57543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:19:35.159877   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:19:35.178801   57543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:19:35.178859   57543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:19:35.188751   57543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:19:35.188813   57543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:19:35.198689   57543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:19:35.208940   57543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:19:35.218508   57543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:19:35.228121   57543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:19:35.236689   57543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:19:35.236733   57543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:19:35.248848   57543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:19:35.257697   57543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:19:35.385914   57543 ssh_runner.go:195] Run: sudo systemctl restart crio
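For context, the runtime preparation in the lines above amounts to a handful of small edits; a consolidated sketch of the expected result (values are the ones shown in the log, the grouping into sections is an assumption about the file layout, not a dump of the actual files):

    # /etc/crio/crio.conf.d/02-crio.conf after the sed edits above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    # /etc/crictl.yaml, so crictl talks to CRI-O instead of probing for sockets
    runtime-endpoint: unix:///var/run/crio/crio.sock

The br_netfilter module and net.ipv4.ip_forward are handled separately (the modprobe and the echo just above) because the bridge-nf-call sysctls only exist once the module is loaded, which is why the first sysctl probe at 19:19:35.236 fails with "No such file or directory" before the fallback.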
	I0719 19:19:35.532216   57543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:19:35.532290   57543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:19:35.536709   57543 start.go:563] Will wait 60s for crictl version
	I0719 19:19:35.536754   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:35.540219   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:19:35.576833   57543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:19:35.576915   57543 ssh_runner.go:195] Run: crio --version
	I0719 19:19:35.604493   57543 ssh_runner.go:195] Run: crio --version
	I0719 19:19:35.633584   57543 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:19:35.634726   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetIP
	I0719 19:19:35.637716   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:35.638129   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:19:26 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:19:35.638156   57543 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:19:35.638349   57543 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:19:35.642264   57543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
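The hosts entry here is injected via a temporary file because the redirection in a plain "sudo echo ... >> /etc/hosts" would be performed by the unprivileged shell, not by sudo. The generic form of the idiom, used again below for control-plane.minikube.internal, is roughly (variable name illustrative, entry taken from the log):

    entry=$'192.168.61.1\thost.minikube.internal'
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts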
	I0719 19:19:35.656080   57543 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-821162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-821162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:19:35.656213   57543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:19:35.656274   57543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:19:35.695587   57543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:19:35.695694   57543 ssh_runner.go:195] Run: which lz4
	I0719 19:19:35.699827   57543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:19:35.703706   57543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:19:35.703735   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:19:37.179403   57543 crio.go:462] duration metric: took 1.479608178s to copy over tarball
	I0719 19:19:37.179465   57543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:19:39.843008   57543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.663507894s)
	I0719 19:19:39.843036   57543 crio.go:469] duration metric: took 2.66360926s to extract the tarball
	I0719 19:19:39.843045   57543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:19:39.884383   57543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:19:39.929402   57543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:19:39.929438   57543 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:19:39.929509   57543 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:19:39.929526   57543 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:19:39.929536   57543 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:19:39.929506   57543 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:19:39.929602   57543 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:19:39.929649   57543 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:19:39.929652   57543 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:19:39.929559   57543 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:19:39.931344   57543 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:19:39.931358   57543 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:19:39.931366   57543 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:19:39.931361   57543 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:19:39.931344   57543 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:19:39.931425   57543 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:19:39.931465   57543 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:19:39.931682   57543 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:19:40.199658   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:19:40.211037   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:19:40.217926   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:19:40.220237   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:19:40.224577   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:19:40.284300   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:19:40.288489   57543 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:19:40.288541   57543 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:19:40.288591   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.294496   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:19:40.341506   57543 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:19:40.341551   57543 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:19:40.341599   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.356087   57543 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:19:40.356129   57543 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:19:40.356136   57543 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:19:40.356171   57543 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:19:40.356179   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.356215   57543 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:19:40.356247   57543 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:19:40.356294   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.356222   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.397266   57543 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:19:40.397312   57543 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:19:40.397348   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:19:40.397355   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.399985   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:19:40.400051   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:19:40.400068   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:19:40.400136   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:19:40.400171   57543 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:19:40.400207   57543 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:19:40.400241   57543 ssh_runner.go:195] Run: which crictl
	I0719 19:19:40.524401   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:19:40.524517   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:19:40.535406   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:19:40.535501   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:19:40.535525   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:19:40.535570   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:19:40.535599   57543 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:19:40.567685   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:19:40.575561   57543 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:19:41.282673   57543 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:19:41.431137   57543 cache_images.go:92] duration metric: took 1.501680937s to LoadCachedImages
	W0719 19:19:41.431239   57543 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0719 19:19:41.431261   57543 kubeadm.go:934] updating node { 192.168.61.58 8443 v1.20.0 crio true true} ...
	I0719 19:19:41.431381   57543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-821162 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-821162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:19:41.431458   57543 ssh_runner.go:195] Run: crio config
	I0719 19:19:41.485329   57543 cni.go:84] Creating CNI manager for ""
	I0719 19:19:41.485352   57543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:19:41.485360   57543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:19:41.485378   57543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-821162 NodeName:kubernetes-upgrade-821162 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:19:41.485539   57543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-821162"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:19:41.485607   57543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:19:41.495409   57543 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:19:41.495480   57543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:19:41.505461   57543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0719 19:19:41.523773   57543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:19:41.542004   57543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:19:41.560665   57543 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0719 19:19:41.565174   57543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:19:41.577742   57543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:19:41.725678   57543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:19:41.743136   57543 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162 for IP: 192.168.61.58
	I0719 19:19:41.743162   57543 certs.go:194] generating shared ca certs ...
	I0719 19:19:41.743182   57543 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:41.743341   57543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:19:41.743386   57543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:19:41.743396   57543 certs.go:256] generating profile certs ...
	I0719 19:19:41.743445   57543 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.key
	I0719 19:19:41.743458   57543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.crt with IP's: []
	I0719 19:19:41.868693   57543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.crt ...
	I0719 19:19:41.868738   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.crt: {Name:mkb9034b8ddb30b7750b84f652927e3a24718812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:41.868928   57543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.key ...
	I0719 19:19:41.868946   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.key: {Name:mk2d056f7c37dd11b2469bcac78aa36ae2486067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:41.869056   57543 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key.ad468232
	I0719 19:19:41.869086   57543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt.ad468232 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.58]
	I0719 19:19:42.153259   57543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt.ad468232 ...
	I0719 19:19:42.153291   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt.ad468232: {Name:mkff33e1fc4df660d4a7c62b13e9177f59bc33b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:42.153459   57543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key.ad468232 ...
	I0719 19:19:42.153473   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key.ad468232: {Name:mk3f3816ce9c4d6b8025bd81ff38da431bffb1a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:42.153539   57543 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt.ad468232 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt
	I0719 19:19:42.153628   57543 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key.ad468232 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key
	I0719 19:19:42.153696   57543 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.key
	I0719 19:19:42.153716   57543 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.crt with IP's: []
	I0719 19:19:42.364656   57543 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.crt ...
	I0719 19:19:42.364697   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.crt: {Name:mk0781fa7568faf913f889e8bf2057244f1ef649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:42.384020   57543 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.key ...
	I0719 19:19:42.384057   57543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.key: {Name:mkc86aedeaa6d55393853d7c6757ee8114cca058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:19:42.384308   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:19:42.384359   57543 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:19:42.384371   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:19:42.384412   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:19:42.384443   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:19:42.384471   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:19:42.384518   57543 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:19:42.385137   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:19:42.410758   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:19:42.434758   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:19:42.462282   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:19:42.492386   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 19:19:42.521947   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:19:42.545455   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:19:42.568734   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:19:42.591624   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:19:42.614741   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:19:42.638993   57543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:19:42.661065   57543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:19:42.679824   57543 ssh_runner.go:195] Run: openssl version
	I0719 19:19:42.688434   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:19:42.699295   57543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:19:42.706841   57543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:19:42.706904   57543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:19:42.713269   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:19:42.726829   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:19:42.738345   57543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:19:42.742795   57543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:19:42.742843   57543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:19:42.748483   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:19:42.759862   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:19:42.771235   57543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:19:42.776263   57543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:19:42.776320   57543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:19:42.782381   57543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
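Each of the three certificate installs above follows the same pattern: place the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs, and add a subject-hash symlink (<hash>.0) so OpenSSL's trust lookup can find it; the hash-named links b5213941.0, 51391683.0 and 3ec20f2e.0 created above are those subject hashes. A minimal sketch of the pattern (the loop form is illustrative, not from the log):

    for pem in minikubeCA.pem 22028.pem 220282.pem; do
      src="/usr/share/ca-certificates/$pem"
      sudo ln -fs "$src" "/etc/ssl/certs/$pem"
      # subject hash, e.g. b5213941 for minikubeCA.pem
      hash=$(openssl x509 -hash -noout -in "$src")
      sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/${hash}.0"
    done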
	I0719 19:19:42.793274   57543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:19:42.797284   57543 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 19:19:42.797346   57543 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-821162 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-821162 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:19:42.797423   57543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:19:42.797490   57543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:19:42.842235   57543 cri.go:89] found id: ""
	I0719 19:19:42.842299   57543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:19:42.852393   57543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:19:42.862182   57543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:19:42.871854   57543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:19:42.871875   57543 kubeadm.go:157] found existing configuration files:
	
	I0719 19:19:42.871917   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:19:42.880655   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:19:42.880707   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:19:42.890253   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:19:42.898957   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:19:42.899017   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:19:42.908421   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:19:42.917500   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:19:42.917552   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:19:42.929642   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:19:42.938696   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:19:42.938762   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
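The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. The same check as a compact sketch (endpoint and file names taken from the log; the loop form is illustrative):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done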
	I0719 19:19:42.947529   57543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:19:43.218906   57543 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:21:40.488453   57543 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:21:40.488580   57543 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:21:40.489995   57543 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:21:40.490057   57543 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:21:40.490148   57543 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:21:40.490259   57543 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:21:40.490400   57543 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:21:40.490461   57543 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:21:40.492327   57543 out.go:204]   - Generating certificates and keys ...
	I0719 19:21:40.492410   57543 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:21:40.492514   57543 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:21:40.492623   57543 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 19:21:40.492701   57543 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 19:21:40.492784   57543 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 19:21:40.492875   57543 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 19:21:40.492960   57543 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 19:21:40.493123   57543 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	I0719 19:21:40.493236   57543 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 19:21:40.493388   57543 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	I0719 19:21:40.493466   57543 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 19:21:40.493565   57543 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 19:21:40.493622   57543 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 19:21:40.493695   57543 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:21:40.493810   57543 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:21:40.493893   57543 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:21:40.493984   57543 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:21:40.494070   57543 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:21:40.494224   57543 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:21:40.494361   57543 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:21:40.494426   57543 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:21:40.494515   57543 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:21:40.495925   57543 out.go:204]   - Booting up control plane ...
	I0719 19:21:40.496012   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:21:40.496076   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:21:40.496137   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:21:40.496212   57543 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:21:40.496360   57543 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:21:40.496449   57543 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:21:40.496562   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:21:40.496831   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:21:40.496976   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:21:40.497167   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:21:40.497255   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:21:40.497493   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:21:40.497569   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:21:40.497804   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:21:40.497873   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:21:40.498024   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:21:40.498031   57543 kubeadm.go:310] 
	I0719 19:21:40.498077   57543 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:21:40.498132   57543 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:21:40.498149   57543 kubeadm.go:310] 
	I0719 19:21:40.498208   57543 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:21:40.498254   57543 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:21:40.498341   57543 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:21:40.498348   57543 kubeadm.go:310] 
	I0719 19:21:40.498486   57543 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:21:40.498545   57543 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:21:40.498592   57543 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:21:40.498601   57543 kubeadm.go:310] 
	I0719 19:21:40.498763   57543 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:21:40.498888   57543 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:21:40.498911   57543 kubeadm.go:310] 
	I0719 19:21:40.499061   57543 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:21:40.499198   57543 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:21:40.499325   57543 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:21:40.499429   57543 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:21:40.499454   57543 kubeadm.go:310] 
	W0719 19:21:40.499557   57543 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-821162 localhost] and IPs [192.168.61.58 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:21:40.499606   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:21:40.979041   57543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:21:40.993977   57543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:21:41.006915   57543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:21:41.006934   57543 kubeadm.go:157] found existing configuration files:
	
	I0719 19:21:41.006978   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:21:41.019057   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:21:41.019120   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:21:41.031498   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:21:41.042977   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:21:41.043091   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:21:41.054089   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:21:41.064654   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:21:41.064702   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:21:41.074880   57543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:21:41.083722   57543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:21:41.083772   57543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:21:41.095303   57543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:21:41.330485   57543 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:23:37.446443   57543 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:23:37.446555   57543 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:23:37.448078   57543 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:23:37.448139   57543 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:23:37.448239   57543 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:23:37.448322   57543 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:23:37.448411   57543 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:23:37.448492   57543 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:23:37.450275   57543 out.go:204]   - Generating certificates and keys ...
	I0719 19:23:37.450388   57543 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:23:37.450484   57543 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:23:37.450597   57543 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:23:37.450689   57543 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:23:37.450796   57543 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:23:37.450881   57543 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:23:37.450966   57543 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:23:37.451053   57543 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:23:37.451175   57543 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:23:37.451292   57543 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:23:37.451352   57543 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:23:37.451432   57543 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:23:37.451502   57543 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:23:37.451574   57543 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:23:37.451666   57543 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:23:37.451745   57543 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:23:37.451913   57543 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:23:37.452020   57543 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:23:37.452109   57543 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:23:37.452220   57543 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:23:37.453907   57543 out.go:204]   - Booting up control plane ...
	I0719 19:23:37.454005   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:23:37.454140   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:23:37.454238   57543 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:23:37.454365   57543 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:23:37.454600   57543 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:23:37.454686   57543 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:23:37.454782   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:23:37.454966   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:23:37.455079   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:23:37.455354   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:23:37.455429   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:23:37.455633   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:23:37.455741   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:23:37.455980   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:23:37.456066   57543 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:23:37.456341   57543 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:23:37.456363   57543 kubeadm.go:310] 
	I0719 19:23:37.456414   57543 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:23:37.456475   57543 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:23:37.456488   57543 kubeadm.go:310] 
	I0719 19:23:37.456524   57543 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:23:37.456564   57543 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:23:37.456691   57543 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:23:37.456705   57543 kubeadm.go:310] 
	I0719 19:23:37.456836   57543 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:23:37.456888   57543 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:23:37.456919   57543 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:23:37.456927   57543 kubeadm.go:310] 
	I0719 19:23:37.457098   57543 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:23:37.457222   57543 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:23:37.457231   57543 kubeadm.go:310] 
	I0719 19:23:37.457394   57543 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:23:37.457520   57543 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:23:37.457637   57543 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:23:37.457739   57543 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:23:37.457761   57543 kubeadm.go:310] 
	I0719 19:23:37.457838   57543 kubeadm.go:394] duration metric: took 3m54.66049415s to StartCluster
	I0719 19:23:37.457885   57543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:23:37.457953   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:23:37.505506   57543 cri.go:89] found id: ""
	I0719 19:23:37.505539   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.505549   57543 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:23:37.505557   57543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:23:37.505622   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:23:37.537742   57543 cri.go:89] found id: ""
	I0719 19:23:37.537765   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.537772   57543 logs.go:278] No container was found matching "etcd"
	I0719 19:23:37.537778   57543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:23:37.537828   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:23:37.569905   57543 cri.go:89] found id: ""
	I0719 19:23:37.569929   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.569937   57543 logs.go:278] No container was found matching "coredns"
	I0719 19:23:37.569943   57543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:23:37.569988   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:23:37.603380   57543 cri.go:89] found id: ""
	I0719 19:23:37.603403   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.603410   57543 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:23:37.603415   57543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:23:37.603462   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:23:37.636351   57543 cri.go:89] found id: ""
	I0719 19:23:37.636383   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.636392   57543 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:23:37.636400   57543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:23:37.636464   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:23:37.667105   57543 cri.go:89] found id: ""
	I0719 19:23:37.667141   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.667152   57543 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:23:37.667160   57543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:23:37.667222   57543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:23:37.698938   57543 cri.go:89] found id: ""
	I0719 19:23:37.698978   57543 logs.go:276] 0 containers: []
	W0719 19:23:37.698989   57543 logs.go:278] No container was found matching "kindnet"
	I0719 19:23:37.699002   57543 logs.go:123] Gathering logs for kubelet ...
	I0719 19:23:37.699022   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:23:37.754487   57543 logs.go:123] Gathering logs for dmesg ...
	I0719 19:23:37.754518   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:23:37.767334   57543 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:23:37.767367   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:23:37.887139   57543 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:23:37.887167   57543 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:23:37.887184   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:23:37.999818   57543 logs.go:123] Gathering logs for container status ...
	I0719 19:23:37.999873   57543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 19:23:38.040221   57543 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:23:38.040263   57543 out.go:239] * 
	W0719 19:23:38.040324   57543 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0719 19:23:38.040367   57543 out.go:239] * 
	W0719 19:23:38.041348   57543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:23:38.044374   57543 out.go:177] 
	W0719 19:23:38.045682   57543 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0719 19:23:38.045744   57543 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:23:38.045759   57543 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:23:38.047005   57543 out.go:177] 

                                                
                                                
** /stderr **
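The failure above is the K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248. A minimal triage sketch, using only the commands the output itself suggests (profile name and binary path taken from this run; the cgroup-driver flag is the suggestion printed in the log, not a confirmed fix):

	# Inspect the kubelet on the minikube guest
	out/minikube-linux-amd64 -p kubernetes-upgrade-821162 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-821162 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List any control-plane containers cri-o managed to start
	out/minikube-linux-amd64 -p kubernetes-upgrade-821162 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the suggested kubelet cgroup driver override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd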
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-821162
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-821162: (6.316162687s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-821162 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-821162 status --format={{.Host}}: exit status 7 (75.587918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.176166863s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-821162 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.279388ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-821162] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-821162
	    minikube start -p kubernetes-upgrade-821162 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8211622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-821162 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
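The downgrade refusal above is the expected K8S_DOWNGRADE_UNSUPPORTED behavior; the test only verifies the error. For reference, the first recovery path from the suggestion amounts to recreating the profile (sketch only; driver and runtime flags carried over from this test's invocation, and the delete discards the existing cluster state):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-821162
	out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio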
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0719 19:24:43.525120   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-821162 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.733361137s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-19 19:25:23.543649502 +0000 UTC m=+4297.807412491
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-821162 -n kubernetes-upgrade-821162
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-821162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-821162 logs -n 25: (2.438268696s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-179125 sudo cat                            | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo docker                         | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo cat                            | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo cat                            | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo cat                            | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo cat                            | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo                                | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo find                           | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-179125 sudo crio                           | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-179125                                     | cilium-179125             | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC | 19 Jul 24 19:24 UTC |
	| start   | -p pause-666146 --memory=2048                        | pause-666146              | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC | 19 Jul 24 19:25 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-982139                            | stopped-upgrade-982139    | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC | 19 Jul 24 19:24 UTC |
	| start   | -p kubernetes-upgrade-821162                         | kubernetes-upgrade-821162 | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-821162                         | kubernetes-upgrade-821162 | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC | 19 Jul 24 19:25 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                            | old-k8s-version-570369    | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-197194                            | running-upgrade-197194    | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC | 19 Jul 24 19:24 UTC |
	| start   | -p no-preload-971041 --memory=2200                   | no-preload-971041         | jenkins | v1.33.1 | 19 Jul 24 19:24 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	| start   | -p pause-666146                                      | pause-666146              | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:25:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:25:13.530128   65498 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:25:13.530424   65498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:25:13.530436   65498 out.go:304] Setting ErrFile to fd 2...
	I0719 19:25:13.530443   65498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:25:13.530723   65498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:25:13.531336   65498 out.go:298] Setting JSON to false
	I0719 19:25:13.532544   65498 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7656,"bootTime":1721409457,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:25:13.532623   65498 start.go:139] virtualization: kvm guest
	I0719 19:25:13.535265   65498 out.go:177] * [pause-666146] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:25:13.536582   65498 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:25:13.536617   65498 notify.go:220] Checking for updates...
	I0719 19:25:13.537989   65498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:25:13.539296   65498 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:25:13.540599   65498 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:25:13.541822   65498 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:25:13.543132   65498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:25:13.545083   65498 config.go:182] Loaded profile config "pause-666146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:25:13.545721   65498 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:13.545826   65498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:13.567446   65498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I0719 19:25:13.568033   65498 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:13.568918   65498 main.go:141] libmachine: Using API Version  1
	I0719 19:25:13.568939   65498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:13.569325   65498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:13.569544   65498 main.go:141] libmachine: (pause-666146) Calling .DriverName
	I0719 19:25:13.569803   65498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:25:13.570256   65498 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:13.570343   65498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:13.585399   65498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0719 19:25:13.585952   65498 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:13.586514   65498 main.go:141] libmachine: Using API Version  1
	I0719 19:25:13.586538   65498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:13.586848   65498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:13.587026   65498 main.go:141] libmachine: (pause-666146) Calling .DriverName
	I0719 19:25:13.620776   65498 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:25:13.621957   65498 start.go:297] selected driver: kvm2
	I0719 19:25:13.621983   65498 start.go:901] validating driver "kvm2" against &{Name:pause-666146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-666146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.247 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:25:13.622149   65498 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:25:13.622582   65498 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:25:13.622662   65498 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:25:13.642080   65498 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:25:13.642682   65498 cni.go:84] Creating CNI manager for ""
	I0719 19:25:13.642690   65498 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:25:13.642750   65498 start.go:340] cluster config:
	{Name:pause-666146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-666146 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.247 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:25:13.642867   65498 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:25:13.644491   65498 out.go:177] * Starting "pause-666146" primary control-plane node in "pause-666146" cluster
	I0719 19:25:11.787478   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:25:11.787738   65113 main.go:141] libmachine: (old-k8s-version-570369) KVM machine creation complete!
	I0719 19:25:11.788066   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:25:11.788613   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:11.788791   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:11.788926   65113 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 19:25:11.788976   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:25:11.790278   65113 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 19:25:11.790295   65113 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 19:25:11.790318   65113 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 19:25:11.790337   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:11.792743   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.793146   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:11.793174   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.793305   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:11.793489   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.793653   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.793805   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:11.793956   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:11.794169   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:11.794182   65113 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 19:25:11.898823   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:25:11.898850   65113 main.go:141] libmachine: Detecting the provisioner...
	I0719 19:25:11.898862   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:11.901673   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.901959   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:11.901981   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.902159   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:11.902328   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.902544   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.902689   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:11.902880   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:11.903047   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:11.903062   65113 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 19:25:12.012012   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 19:25:12.012124   65113 main.go:141] libmachine: found compatible host: buildroot
	I0719 19:25:12.012140   65113 main.go:141] libmachine: Provisioning with buildroot...
	I0719 19:25:12.012151   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.012393   65113 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:25:12.012418   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.012624   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.014909   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.015192   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.015233   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.015387   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.015576   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.015730   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.015894   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.016054   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.016220   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.016234   65113 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:25:12.132568   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:25:12.132598   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.135645   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.136106   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.136129   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.136269   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.136490   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.136695   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.136840   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.136996   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.137198   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.137225   65113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:25:12.251957   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:25:12.251983   65113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:25:12.252024   65113 buildroot.go:174] setting up certificates
	I0719 19:25:12.252033   65113 provision.go:84] configureAuth start
	I0719 19:25:12.252042   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.252310   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:12.254951   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.255374   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.255398   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.255649   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.258035   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.258326   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.258348   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.258499   65113 provision.go:143] copyHostCerts
	I0719 19:25:12.258578   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:25:12.258591   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:25:12.258646   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:25:12.258748   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:25:12.258756   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:25:12.258776   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:25:12.258868   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:25:12.258876   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:25:12.258895   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:25:12.258949   65113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:25:12.400305   65113 provision.go:177] copyRemoteCerts
	I0719 19:25:12.400358   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:25:12.400383   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.403043   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.403426   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.403465   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.403618   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.403855   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.404016   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.404212   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:12.494424   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:25:12.518045   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:25:12.539532   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:25:12.561104   65113 provision.go:87] duration metric: took 309.061303ms to configureAuth
	I0719 19:25:12.561132   65113 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:25:12.561318   65113 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:25:12.561386   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.563974   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.564298   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.564318   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.564507   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.564707   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.564869   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.565004   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.565155   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.565315   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.565337   65113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:25:12.845369   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:25:12.845396   65113 main.go:141] libmachine: Checking connection to Docker...
	I0719 19:25:12.845406   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetURL
	I0719 19:25:12.846599   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using libvirt version 6000000
	I0719 19:25:12.848848   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.849174   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.849204   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.849424   65113 main.go:141] libmachine: Docker is up and running!
	I0719 19:25:12.849439   65113 main.go:141] libmachine: Reticulating splines...
	I0719 19:25:12.849447   65113 client.go:171] duration metric: took 20.775069344s to LocalClient.Create
	I0719 19:25:12.849473   65113 start.go:167] duration metric: took 20.775132716s to libmachine.API.Create "old-k8s-version-570369"
	I0719 19:25:12.849485   65113 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:25:12.849514   65113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:25:12.849537   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:12.849791   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:25:12.849816   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.852334   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.852655   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.852682   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.852867   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.853055   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.853189   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.853336   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:12.934181   65113 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:25:12.938481   65113 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:25:12.938508   65113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:25:12.938572   65113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:25:12.938698   65113 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:25:12.938817   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:25:12.947871   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:25:12.970064   65113 start.go:296] duration metric: took 120.546471ms for postStartSetup
	I0719 19:25:12.970121   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:25:12.970687   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:12.973371   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.973938   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.973972   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.974312   65113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:25:12.974498   65113 start.go:128] duration metric: took 20.921561061s to createHost
	I0719 19:25:12.974522   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.976940   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.977308   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.977344   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.977522   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.977716   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.977888   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.978055   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.978235   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.978452   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.978465   65113 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:25:13.084063   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417113.046702003
	
	I0719 19:25:13.084091   65113 fix.go:216] guest clock: 1721417113.046702003
	I0719 19:25:13.084101   65113 fix.go:229] Guest: 2024-07-19 19:25:13.046702003 +0000 UTC Remote: 2024-07-19 19:25:12.974509819 +0000 UTC m=+26.255657653 (delta=72.192184ms)
	I0719 19:25:13.084119   65113 fix.go:200] guest clock delta is within tolerance: 72.192184ms
	I0719 19:25:13.084124   65113 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 21.031406854s
	I0719 19:25:13.084148   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.084465   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:13.087226   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.087634   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.087666   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.087824   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088389   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088565   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088659   65113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:25:13.088703   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:13.088760   65113 ssh_runner.go:195] Run: cat /version.json
	I0719 19:25:13.088783   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:13.091368   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091732   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.091821   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091875   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091904   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:13.092069   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:13.092221   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:13.092242   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.092279   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.092342   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:13.092418   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:13.092555   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:13.092703   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:13.092877   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:13.222667   65113 ssh_runner.go:195] Run: systemctl --version
	I0719 19:25:13.229930   65113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:25:13.389152   65113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:25:13.397366   65113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:25:13.397434   65113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:25:13.417521   65113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:25:13.417550   65113 start.go:495] detecting cgroup driver to use...
	I0719 19:25:13.417617   65113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:25:13.435143   65113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:25:13.452657   65113 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:25:13.452726   65113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:25:13.470809   65113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:25:13.486767   65113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:25:13.624734   65113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:25:13.781004   65113 docker.go:233] disabling docker service ...
	I0719 19:25:13.781083   65113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:25:13.796845   65113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:25:13.812336   65113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:25:13.962842   65113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:25:14.090594   65113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:25:14.104569   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:25:14.121918   65113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:25:14.121982   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.131932   65113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:25:14.131994   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.144530   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.154355   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.163859   65113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:25:14.173710   65113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:25:14.182589   65113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:25:14.182632   65113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:25:14.194812   65113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:25:14.203921   65113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:25:14.318322   65113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:25:14.495411   65113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:25:14.495488   65113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:25:14.501604   65113 start.go:563] Will wait 60s for crictl version
	I0719 19:25:14.501662   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:14.505975   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:25:14.546509   65113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:25:14.546618   65113 ssh_runner.go:195] Run: crio --version
	I0719 19:25:14.588192   65113 ssh_runner.go:195] Run: crio --version
	I0719 19:25:14.623792   65113 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:25:12.617698   64960 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5 c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c 802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5 705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2 c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4 2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420 47783fa17af8a7fbaf1cfe0af23d5d9078071c850f5ab647767c0a61be1c5dc5 4896d2d70a9478b571c1d368971057c79b809d4891050afae5788bdcc6222164 0087881605c8ce4739c00d29fa7301c0de6050b2f69eb0144c7793ff16d13f4b: (10.618953835s)
	W0719 19:25:12.617788   64960 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5 c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c 802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5 705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2 c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4 2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420 47783fa17af8a7fbaf1cfe0af23d5d9078071c850f5ab647767c0a61be1c5dc5 4896d2d70a9478b571c1d368971057c79b809d4891050afae5788bdcc6222164 0087881605c8ce4739c00d29fa7301c0de6050b2f69eb0144c7793ff16d13f4b: Process exited with status 1
	stdout:
	782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5
	c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c
	802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c
	d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5
	705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2
	c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d
	d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4
	2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e
	
	stderr:
	E0719 19:25:12.609299    3098 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420\": container with ID starting with cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420 not found: ID does not exist" containerID="cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420"
	time="2024-07-19T19:25:12Z" level=fatal msg="stopping the container \"cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420\": rpc error: code = NotFound desc = could not find container \"cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420\": container with ID starting with cf5b43bd9e918848f34a40c31e0f615aeede82fff60f1d4bf338c3ea893ff420 not found: ID does not exist"
	I0719 19:25:12.617856   64960 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:25:12.659051   64960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:25:12.669220   64960 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 19 19:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jul 19 19:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Jul 19 19:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jul 19 19:24 /etc/kubernetes/scheduler.conf
	
	I0719 19:25:12.669296   64960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:25:12.680503   64960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:25:12.692872   64960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:25:12.701456   64960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:25:12.701510   64960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:25:12.713795   64960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:25:12.724886   64960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:25:12.724957   64960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:25:12.737529   64960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:25:12.749904   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:25:12.801042   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:25:13.555644   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:25:13.820516   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:25:13.875291   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:25:13.945178   64960 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:25:13.945258   64960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:25:14.445461   64960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:25:14.946118   64960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:25:14.972222   64960 api_server.go:72] duration metric: took 1.027041771s to wait for apiserver process to appear ...
	I0719 19:25:14.972252   64960 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:25:14.972272   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
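
After the kubeadm init phases above, the log polls https://192.168.61.58:8443/healthz until the apiserver answers. A minimal sketch of such a poll loop, assuming a cluster-CA-signed serving certificate (hence InsecureSkipVerify) and a flat half-second retry interval rather than minikube's exact schedule:

// healthz_wait_sketch.go - a minimal poll loop in the spirit of the healthz wait above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// the apiserver cert is signed by the cluster CA, so skip verification in this sketch
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.58:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
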
	I0719 19:25:13.086220   65336 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 19:25:13.086539   65336 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:13.086578   65336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:13.103162   65336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0719 19:25:13.103645   65336 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:13.104216   65336 main.go:141] libmachine: Using API Version  1
	I0719 19:25:13.104238   65336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:13.104583   65336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:13.104757   65336 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:25:13.104886   65336 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:25:13.104999   65336 start.go:159] libmachine.API.Create for "no-preload-971041" (driver="kvm2")
	I0719 19:25:13.105021   65336 client.go:168] LocalClient.Create starting
	I0719 19:25:13.105052   65336 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 19:25:13.105087   65336 main.go:141] libmachine: Decoding PEM data...
	I0719 19:25:13.105102   65336 main.go:141] libmachine: Parsing certificate...
	I0719 19:25:13.105158   65336 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 19:25:13.105180   65336 main.go:141] libmachine: Decoding PEM data...
	I0719 19:25:13.105191   65336 main.go:141] libmachine: Parsing certificate...
	I0719 19:25:13.105204   65336 main.go:141] libmachine: Running pre-create checks...
	I0719 19:25:13.105212   65336 main.go:141] libmachine: (no-preload-971041) Calling .PreCreateCheck
	I0719 19:25:13.105484   65336 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:25:13.105900   65336 main.go:141] libmachine: Creating machine...
	I0719 19:25:13.105924   65336 main.go:141] libmachine: (no-preload-971041) Calling .Create
	I0719 19:25:13.106057   65336 main.go:141] libmachine: (no-preload-971041) Creating KVM machine...
	I0719 19:25:13.107227   65336 main.go:141] libmachine: (no-preload-971041) DBG | found existing default KVM network
	I0719 19:25:13.108983   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.108817   65444 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:0b:6a} reservation:<nil>}
	I0719 19:25:13.110177   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.110103   65444 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002da1f0}
	I0719 19:25:13.110265   65336 main.go:141] libmachine: (no-preload-971041) DBG | created network xml: 
	I0719 19:25:13.110292   65336 main.go:141] libmachine: (no-preload-971041) DBG | <network>
	I0719 19:25:13.110326   65336 main.go:141] libmachine: (no-preload-971041) DBG |   <name>mk-no-preload-971041</name>
	I0719 19:25:13.110332   65336 main.go:141] libmachine: (no-preload-971041) DBG |   <dns enable='no'/>
	I0719 19:25:13.110338   65336 main.go:141] libmachine: (no-preload-971041) DBG |   
	I0719 19:25:13.110346   65336 main.go:141] libmachine: (no-preload-971041) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0719 19:25:13.110355   65336 main.go:141] libmachine: (no-preload-971041) DBG |     <dhcp>
	I0719 19:25:13.110364   65336 main.go:141] libmachine: (no-preload-971041) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0719 19:25:13.110395   65336 main.go:141] libmachine: (no-preload-971041) DBG |     </dhcp>
	I0719 19:25:13.110413   65336 main.go:141] libmachine: (no-preload-971041) DBG |   </ip>
	I0719 19:25:13.110432   65336 main.go:141] libmachine: (no-preload-971041) DBG |   
	I0719 19:25:13.110438   65336 main.go:141] libmachine: (no-preload-971041) DBG | </network>
	I0719 19:25:13.110445   65336 main.go:141] libmachine: (no-preload-971041) DBG | 
	I0719 19:25:13.115774   65336 main.go:141] libmachine: (no-preload-971041) DBG | trying to create private KVM network mk-no-preload-971041 192.168.50.0/24...
	I0719 19:25:13.184397   65336 main.go:141] libmachine: (no-preload-971041) DBG | private KVM network mk-no-preload-971041 192.168.50.0/24 created
	I0719 19:25:13.184432   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.184342   65444 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:25:13.184445   65336 main.go:141] libmachine: (no-preload-971041) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041 ...
	I0719 19:25:13.184474   65336 main.go:141] libmachine: (no-preload-971041) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 19:25:13.184540   65336 main.go:141] libmachine: (no-preload-971041) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 19:25:13.435386   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.435254   65444 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa...
	I0719 19:25:13.712684   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.712532   65444 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/no-preload-971041.rawdisk...
	I0719 19:25:13.712718   65336 main.go:141] libmachine: (no-preload-971041) DBG | Writing magic tar header
	I0719 19:25:13.712742   65336 main.go:141] libmachine: (no-preload-971041) DBG | Writing SSH key tar header
	I0719 19:25:13.712767   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:13.712644   65444 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041 ...
	I0719 19:25:13.712786   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041
	I0719 19:25:13.712800   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 19:25:13.712815   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041 (perms=drwx------)
	I0719 19:25:13.712828   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:25:13.712845   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 19:25:13.712854   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 19:25:13.712871   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 19:25:13.712886   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 19:25:13.712902   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 19:25:13.712917   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 19:25:13.712926   65336 main.go:141] libmachine: (no-preload-971041) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 19:25:13.712939   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home/jenkins
	I0719 19:25:13.712951   65336 main.go:141] libmachine: (no-preload-971041) DBG | Checking permissions on dir: /home
	I0719 19:25:13.712960   65336 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:25:13.712976   65336 main.go:141] libmachine: (no-preload-971041) DBG | Skipping /home - not owner
	I0719 19:25:13.714239   65336 main.go:141] libmachine: (no-preload-971041) define libvirt domain using xml: 
	I0719 19:25:13.714264   65336 main.go:141] libmachine: (no-preload-971041) <domain type='kvm'>
	I0719 19:25:13.714275   65336 main.go:141] libmachine: (no-preload-971041)   <name>no-preload-971041</name>
	I0719 19:25:13.714290   65336 main.go:141] libmachine: (no-preload-971041)   <memory unit='MiB'>2200</memory>
	I0719 19:25:13.714327   65336 main.go:141] libmachine: (no-preload-971041)   <vcpu>2</vcpu>
	I0719 19:25:13.714352   65336 main.go:141] libmachine: (no-preload-971041)   <features>
	I0719 19:25:13.714364   65336 main.go:141] libmachine: (no-preload-971041)     <acpi/>
	I0719 19:25:13.714379   65336 main.go:141] libmachine: (no-preload-971041)     <apic/>
	I0719 19:25:13.714398   65336 main.go:141] libmachine: (no-preload-971041)     <pae/>
	I0719 19:25:13.714407   65336 main.go:141] libmachine: (no-preload-971041)     
	I0719 19:25:13.714420   65336 main.go:141] libmachine: (no-preload-971041)   </features>
	I0719 19:25:13.714433   65336 main.go:141] libmachine: (no-preload-971041)   <cpu mode='host-passthrough'>
	I0719 19:25:13.714446   65336 main.go:141] libmachine: (no-preload-971041)   
	I0719 19:25:13.714461   65336 main.go:141] libmachine: (no-preload-971041)   </cpu>
	I0719 19:25:13.714473   65336 main.go:141] libmachine: (no-preload-971041)   <os>
	I0719 19:25:13.714481   65336 main.go:141] libmachine: (no-preload-971041)     <type>hvm</type>
	I0719 19:25:13.714490   65336 main.go:141] libmachine: (no-preload-971041)     <boot dev='cdrom'/>
	I0719 19:25:13.714497   65336 main.go:141] libmachine: (no-preload-971041)     <boot dev='hd'/>
	I0719 19:25:13.714506   65336 main.go:141] libmachine: (no-preload-971041)     <bootmenu enable='no'/>
	I0719 19:25:13.714511   65336 main.go:141] libmachine: (no-preload-971041)   </os>
	I0719 19:25:13.714524   65336 main.go:141] libmachine: (no-preload-971041)   <devices>
	I0719 19:25:13.714532   65336 main.go:141] libmachine: (no-preload-971041)     <disk type='file' device='cdrom'>
	I0719 19:25:13.714568   65336 main.go:141] libmachine: (no-preload-971041)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/boot2docker.iso'/>
	I0719 19:25:13.714598   65336 main.go:141] libmachine: (no-preload-971041)       <target dev='hdc' bus='scsi'/>
	I0719 19:25:13.714607   65336 main.go:141] libmachine: (no-preload-971041)       <readonly/>
	I0719 19:25:13.714612   65336 main.go:141] libmachine: (no-preload-971041)     </disk>
	I0719 19:25:13.714619   65336 main.go:141] libmachine: (no-preload-971041)     <disk type='file' device='disk'>
	I0719 19:25:13.714625   65336 main.go:141] libmachine: (no-preload-971041)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 19:25:13.714634   65336 main.go:141] libmachine: (no-preload-971041)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/no-preload-971041.rawdisk'/>
	I0719 19:25:13.714645   65336 main.go:141] libmachine: (no-preload-971041)       <target dev='hda' bus='virtio'/>
	I0719 19:25:13.714651   65336 main.go:141] libmachine: (no-preload-971041)     </disk>
	I0719 19:25:13.714659   65336 main.go:141] libmachine: (no-preload-971041)     <interface type='network'>
	I0719 19:25:13.714671   65336 main.go:141] libmachine: (no-preload-971041)       <source network='mk-no-preload-971041'/>
	I0719 19:25:13.714680   65336 main.go:141] libmachine: (no-preload-971041)       <model type='virtio'/>
	I0719 19:25:13.714687   65336 main.go:141] libmachine: (no-preload-971041)     </interface>
	I0719 19:25:13.714699   65336 main.go:141] libmachine: (no-preload-971041)     <interface type='network'>
	I0719 19:25:13.714709   65336 main.go:141] libmachine: (no-preload-971041)       <source network='default'/>
	I0719 19:25:13.714721   65336 main.go:141] libmachine: (no-preload-971041)       <model type='virtio'/>
	I0719 19:25:13.714728   65336 main.go:141] libmachine: (no-preload-971041)     </interface>
	I0719 19:25:13.714735   65336 main.go:141] libmachine: (no-preload-971041)     <serial type='pty'>
	I0719 19:25:13.714759   65336 main.go:141] libmachine: (no-preload-971041)       <target port='0'/>
	I0719 19:25:13.714776   65336 main.go:141] libmachine: (no-preload-971041)     </serial>
	I0719 19:25:13.714792   65336 main.go:141] libmachine: (no-preload-971041)     <console type='pty'>
	I0719 19:25:13.714808   65336 main.go:141] libmachine: (no-preload-971041)       <target type='serial' port='0'/>
	I0719 19:25:13.714819   65336 main.go:141] libmachine: (no-preload-971041)     </console>
	I0719 19:25:13.714827   65336 main.go:141] libmachine: (no-preload-971041)     <rng model='virtio'>
	I0719 19:25:13.714838   65336 main.go:141] libmachine: (no-preload-971041)       <backend model='random'>/dev/random</backend>
	I0719 19:25:13.714846   65336 main.go:141] libmachine: (no-preload-971041)     </rng>
	I0719 19:25:13.714854   65336 main.go:141] libmachine: (no-preload-971041)     
	I0719 19:25:13.714863   65336 main.go:141] libmachine: (no-preload-971041)     
	I0719 19:25:13.714873   65336 main.go:141] libmachine: (no-preload-971041)   </devices>
	I0719 19:25:13.714885   65336 main.go:141] libmachine: (no-preload-971041) </domain>
	I0719 19:25:13.714898   65336 main.go:141] libmachine: (no-preload-971041) 
	I0719 19:25:13.718989   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:3e:2a:6f in network default
	I0719 19:25:13.719521   65336 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:25:13.719549   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:13.720231   65336 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:25:13.720548   65336 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:25:13.721158   65336 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:25:13.721833   65336 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:25:15.214760   65336 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:25:15.215812   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:15.216322   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:15.216344   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:15.216245   65444 retry.go:31] will retry after 212.042592ms: waiting for machine to come up
	I0719 19:25:15.429886   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:15.430575   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:15.430612   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:15.430526   65444 retry.go:31] will retry after 267.069272ms: waiting for machine to come up
	I0719 19:25:15.699127   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:15.699691   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:15.699715   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:15.699630   65444 retry.go:31] will retry after 308.251531ms: waiting for machine to come up
	I0719 19:25:16.009093   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:16.009765   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:16.009802   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:16.009681   65444 retry.go:31] will retry after 428.392998ms: waiting for machine to come up
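
The "will retry after ..." lines above come from a retry loop that waits for the freshly defined domain to obtain a DHCP lease. A generic sketch of that pattern; the doubling backoff with a cap is an assumption, not minikube's exact jittered schedule:

// retry_sketch.go - generic retry loop in the spirit of retry.go above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryUntil(timeout time.Duration, fn func() error) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up after %s: %w", timeout, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
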
	I0719 19:25:14.624823   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:14.628090   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:14.628501   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:14.628524   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:14.628748   65113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:25:14.633863   65113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:25:14.649388   65113 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:25:14.649528   65113 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:25:14.649598   65113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:25:14.682758   65113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:25:14.682831   65113 ssh_runner.go:195] Run: which lz4
	I0719 19:25:14.687813   65113 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:25:14.693121   65113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:25:14.693158   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:25:16.198992   65113 crio.go:462] duration metric: took 1.511213625s to copy over tarball
	I0719 19:25:16.199077   65113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:25:13.645659   65498 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:25:13.645702   65498 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 19:25:13.645715   65498 cache.go:56] Caching tarball of preloaded images
	I0719 19:25:13.645813   65498 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:25:13.645828   65498 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 19:25:13.645978   65498 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/pause-666146/config.json ...
	I0719 19:25:13.646260   65498 start.go:360] acquireMachinesLock for pause-666146: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
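
The pause-666146 start path above skips the download because the preload tarball is already in the local cache. A minimal existence check in that spirit follows; the path is the one from the log and the helper is illustrative only.

// preload_check_sketch.go - checks whether a preload tarball is already cached before downloading.
package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/19307-14841/.minikube/cache/" +
		"preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4"

	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download")
	} else if os.IsNotExist(err) {
		fmt.Println("preload missing, would download it")
	} else {
		fmt.Println("could not stat preload:", err)
	}
}
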
	I0719 19:25:17.815890   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:25:17.815925   64960 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:25:17.815946   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:17.865933   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:25:17.865968   64960 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:25:17.973199   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:18.008787   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:25:18.008821   64960 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:25:18.473025   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:18.478718   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:25:18.478749   64960 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:25:18.973014   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:18.979085   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:25:18.979117   64960 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:25:19.473337   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:19.480976   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0719 19:25:19.490465   64960 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:25:19.490500   64960 api_server.go:131] duration metric: took 4.518241298s to wait for apiserver health ...
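
The verbose 500 responses above list each check as [+] or [-]. A small sketch that extracts the failing check names from such a body; the sample input is abbreviated from the log:

// healthz_parse_sketch.go - pulls the failing checks out of a verbose /healthz body.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func failedChecks(body string) []string {
	var failed []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "[-]") {
			// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			fields := strings.Fields(strings.TrimPrefix(line, "[-]"))
			if len(fields) > 0 {
				failed = append(failed, fields[0])
			}
		}
	}
	return failed
}

func main() {
	sample := `[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed`
	fmt.Println(failedChecks(sample))
}
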
	I0719 19:25:19.490520   64960 cni.go:84] Creating CNI manager for ""
	I0719 19:25:19.490529   64960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:25:19.588345   64960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:25:19.759856   64960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:25:19.774013   64960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:25:19.795599   64960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:25:19.808692   64960 system_pods.go:59] 8 kube-system pods found
	I0719 19:25:19.808730   64960 system_pods.go:61] "coredns-5cfdc65f69-2dpw5" [197d443b-5033-44fe-a1d7-939f5acd60b6] Running
	I0719 19:25:19.808741   64960 system_pods.go:61] "coredns-5cfdc65f69-hc2q6" [f74cc097-e257-424f-829f-985cc52b4540] Running
	I0719 19:25:19.808747   64960 system_pods.go:61] "etcd-kubernetes-upgrade-821162" [ac47e1be-87c0-4613-ab43-4341e7ed1ed9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:25:19.808755   64960 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-821162" [6cf86e57-f519-4c3e-928f-edf4ac2ba3c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:25:19.808767   64960 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-821162" [2028d7b0-06d3-40bf-b48a-f69db5cad165] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:25:19.808777   64960 system_pods.go:61] "kube-proxy-jmgz9" [6977e800-7ea0-4306-9390-674f30c19d7a] Running
	I0719 19:25:19.808785   64960 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-821162" [d4eb4f6c-f84d-4826-bf3a-e89052f155ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:25:19.808792   64960 system_pods.go:61] "storage-provisioner" [02810e57-5b03-4b0f-83a4-56623e9a0088] Running
	I0719 19:25:19.808802   64960 system_pods.go:74] duration metric: took 13.177857ms to wait for pod list to return data ...
	I0719 19:25:19.808813   64960 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:25:19.815332   64960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:25:19.815358   64960 node_conditions.go:123] node cpu capacity is 2
	I0719 19:25:19.815367   64960 node_conditions.go:105] duration metric: took 6.550137ms to run NodePressure ...
	I0719 19:25:19.815386   64960 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
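
Once the apiserver is healthy, the log configures the bridge CNI by writing a conflist into /etc/cni/net.d. The sketch below emits the general shape of such a file from Go; it is a generic bridge-plus-portmap example (the pod subnet matches the CIDR in the log), not the exact 496-byte file minikube installs.

// cni_conflist_sketch.go - prints a generic bridge CNI conflist.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
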
	I0719 19:25:16.439485   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:16.440354   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:16.440396   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:16.440214   65444 retry.go:31] will retry after 611.947296ms: waiting for machine to come up
	I0719 19:25:17.054100   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:17.054737   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:17.054771   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:17.054698   65444 retry.go:31] will retry after 938.58282ms: waiting for machine to come up
	I0719 19:25:17.995557   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:17.996088   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:17.996116   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:17.996043   65444 retry.go:31] will retry after 1.058560737s: waiting for machine to come up
	I0719 19:25:19.056762   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:19.057370   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:19.057402   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:19.057312   65444 retry.go:31] will retry after 1.000451218s: waiting for machine to come up
	I0719 19:25:20.059056   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:20.059561   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:20.059587   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:20.059516   65444 retry.go:31] will retry after 1.358544229s: waiting for machine to come up
	I0719 19:25:21.419924   65336 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:25:21.420434   65336 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:25:21.420462   65336 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:25:21.420379   65444 retry.go:31] will retry after 2.047363624s: waiting for machine to come up
	I0719 19:25:19.116871   65113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.917754749s)
	I0719 19:25:19.116909   65113 crio.go:469] duration metric: took 2.917883315s to extract the tarball
	I0719 19:25:19.116918   65113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:25:19.158429   65113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:25:19.201773   65113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:25:19.201798   65113 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:25:19.201872   65113 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:19.201893   65113 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.201892   65113 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.201943   65113 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.201876   65113 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.202142   65113 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:25:19.202114   65113 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.201876   65113 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.203202   65113 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:19.203246   65113 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.203918   65113 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.203927   65113 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.203936   65113 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.203938   65113 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:25:19.204002   65113 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.204003   65113 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.431350   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:25:19.468154   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.472136   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.472832   65113 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:25:19.472886   65113 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:25:19.472936   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.474064   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.488243   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.488498   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.495067   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.630243   65113 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:25:19.630281   65113 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.630333   65113 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:25:19.630373   65113 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.630421   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.630438   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:25:19.630345   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641299   65113 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:25:19.641337   65113 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.641377   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641429   65113 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:25:19.641467   65113 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.641479   65113 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:25:19.641502   65113 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:25:19.641507   65113 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.641511   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641525   65113 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.641544   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641554   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.672961   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.673019   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.673044   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.673084   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:25:19.673142   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.673149   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.673202   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.797213   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:25:19.797266   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:25:19.797314   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:25:19.797373   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:25:19.797398   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:25:19.797448   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:25:20.486805   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:20.623457   65113 cache_images.go:92] duration metric: took 1.421643712s to LoadCachedImages
	W0719 19:25:20.623561   65113 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
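The cache_images lines above check whether each required image already exists in the container runtime at the expected digest, remove a stale copy with crictl, and otherwise fall back to loading it from the local cache directory. A minimal, illustrative sketch of that presence check (not minikube's cache_images.go; the crictl inspecti call and JSON fields are assumptions based on the commands logged above):

    // imageNeedsTransfer reports whether the runtime lacks imageRef at the
    // expected image ID, mirroring the "needs transfer" checks logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func imageNeedsTransfer(imageRef, wantID string) bool {
        out, err := exec.Command("sudo", "crictl", "inspecti", "-o", "json", imageRef).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        var resp struct {
            Status struct {
                ID string `json:"id"`
            } `json:"status"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            return true
        }
        return strings.TrimPrefix(resp.Status.ID, "sha256:") != wantID
    }

    func main() {
        ref := "registry.k8s.io/kube-apiserver:v1.20.0"
        want := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
        if imageNeedsTransfer(ref, want) {
            fmt.Printf("%q needs transfer\n", ref)
            // The log then removes the stale image before reloading it from cache:
            _ = exec.Command("sudo", "crictl", "rmi", ref).Run()
        }
    }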
	I0719 19:25:20.623581   65113 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:25:20.623709   65113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:25:20.623772   65113 ssh_runner.go:195] Run: crio config
	I0719 19:25:20.675456   65113 cni.go:84] Creating CNI manager for ""
	I0719 19:25:20.675478   65113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:25:20.675486   65113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:25:20.675508   65113 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:25:20.675643   65113 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:25:20.675701   65113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:25:20.686000   65113 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:25:20.686071   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:25:20.696066   65113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:25:20.712140   65113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:25:20.727498   65113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
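The kubeadm.yaml.new copied above is rendered from the cluster parameters printed at kubeadm.go:181. A hypothetical sketch of rendering just the InitConfiguration fragment with text/template; the struct fields and template text here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    type params struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
        NodeIP           string
    }

    func main() {
        // Values taken from the node parameters logged above.
        p := params{
            AdvertiseAddress: "192.168.39.220",
            APIServerPort:    8443,
            CRISocket:        "/var/run/crio/crio.sock",
            NodeName:         "old-k8s-version-570369",
            NodeIP:           "192.168.39.220",
        }
        // The rendered document is what gets copied to /var/tmp/minikube/kubeadm.yaml.new.
        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }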
	I0719 19:25:20.743954   65113 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:25:20.747396   65113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:25:20.758968   65113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:25:20.889758   65113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:25:20.905948   65113 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:25:20.905964   65113 certs.go:194] generating shared ca certs ...
	I0719 19:25:20.905978   65113 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:20.906119   65113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:25:20.906155   65113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:25:20.906164   65113 certs.go:256] generating profile certs ...
	I0719 19:25:20.906214   65113 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:25:20.906232   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt with IP's: []
	I0719 19:25:21.039126   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt ...
	I0719 19:25:21.039158   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: {Name:mk0c0717927b787bb9a6c29182f44712ce9fe3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.039374   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key ...
	I0719 19:25:21.039398   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key: {Name:mkcff3a4be82bcd0ab02bb354461057bce059fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.039529   65113 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:25:21.039554   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I0719 19:25:21.186701   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d ...
	I0719 19:25:21.186739   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d: {Name:mkadcc6bb9c4c7be3c012cf09af260a515801ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.186932   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d ...
	I0719 19:25:21.186962   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d: {Name:mkbfaee0b7d4f20fc1b2d4ab30d4281e256b705e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.187060   65113 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt
	I0719 19:25:21.187158   65113 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key
	I0719 19:25:21.187239   65113 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:25:21.187263   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt with IP's: []
	I0719 19:25:21.350294   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt ...
	I0719 19:25:21.350324   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt: {Name:mkff46854d4c04b2afbe102ad3d53582d31fac2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.350481   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key ...
	I0719 19:25:21.350493   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key: {Name:mk45f7518ac8f2682dcf8a6b5556cceea6e85d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
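The certs.go/crypto.go steps above issue per-profile certificates signed by the shared minikubeCA, including an apiserver certificate whose SANs cover the service IP, localhost, and the node IP listed in the log. A minimal sketch of issuing such a certificate with crypto/x509, assuming the CA certificate and a PKCS#1 key already exist on disk (file names, serial handling, and lifetimes are placeholders, not minikube's crypto.go):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caCert, caKey := loadCA("ca.crt", "ca.key")

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            // The IP SANs logged for apiserver.crt.41bb110d above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.220"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        writePEM("apiserver.crt", "CERTIFICATE", der)
        writePEM("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
    }

    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
        certPEM, _ := os.ReadFile(certPath)
        keyPEM, _ := os.ReadFile(keyPath)
        cb, _ := pem.Decode(certPEM)
        kb, _ := pem.Decode(keyPEM)
        if cb == nil || kb == nil {
            panic("could not decode CA PEM files")
        }
        cert, err := x509.ParseCertificate(cb.Bytes)
        if err != nil {
            panic(err)
        }
        key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
        if err != nil {
            panic(err)
        }
        return cert, key
    }

    func writePEM(path, typ string, der []byte) {
        f, err := os.Create(path)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        pem.Encode(f, &pem.Block{Type: typ, Bytes: der})
    }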
	I0719 19:25:21.350655   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:25:21.350692   65113 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:25:21.350701   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:25:21.350726   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:25:21.350767   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:25:21.350797   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:25:21.350834   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:25:21.351424   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:25:21.376565   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:25:21.401761   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:25:21.426108   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:25:21.449274   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:25:21.472921   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:25:21.496418   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:25:21.547974   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:25:21.572554   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:25:21.596778   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:25:21.619518   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:25:21.642300   65113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:25:21.658505   65113 ssh_runner.go:195] Run: openssl version
	I0719 19:25:21.664365   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:25:21.674743   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.678951   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.679007   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.684625   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:25:21.694878   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:25:21.705292   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.709676   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.709735   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.715061   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:25:21.725472   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:25:21.735799   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.739954   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.740013   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.745450   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
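The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem). A small illustrative sketch of the same step driven from Go; in the test the equivalent shell commands run over SSH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

        // Same command as in the log: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // ln -fs equivalent: drop any stale link, then create the new one.
        _ = os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Printf("linked %s -> %s\n", link, pemPath)
    }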
	I0719 19:25:21.756026   65113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:25:21.759745   65113 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 19:25:21.759819   65113 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:25:21.759920   65113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:25:21.759980   65113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:25:22.143506   64960 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.328086354s)
	I0719 19:25:22.143598   64960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:25:22.157591   64960 ops.go:34] apiserver oom_adj: -16
	I0719 19:25:22.157615   64960 kubeadm.go:597] duration metric: took 20.256303897s to restartPrimaryControlPlane
	I0719 19:25:22.157627   64960 kubeadm.go:394] duration metric: took 20.754154929s to StartCluster
	I0719 19:25:22.157648   64960 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:22.157741   64960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:25:22.158894   64960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:22.159152   64960 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:25:22.159209   64960 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:25:22.159277   64960 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-821162"
	I0719 19:25:22.159311   64960 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-821162"
	W0719 19:25:22.159322   64960 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:25:22.159351   64960 host.go:66] Checking if "kubernetes-upgrade-821162" exists ...
	I0719 19:25:22.159375   64960 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-821162"
	I0719 19:25:22.159412   64960 config.go:182] Loaded profile config "kubernetes-upgrade-821162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:25:22.159431   64960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-821162"
	I0719 19:25:22.159801   64960 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:22.159817   64960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:22.159882   64960 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:22.159924   64960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:22.160652   64960 out.go:177] * Verifying Kubernetes components...
	I0719 19:25:22.162165   64960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:25:22.177218   64960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0719 19:25:22.177809   64960 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:22.178405   64960 main.go:141] libmachine: Using API Version  1
	I0719 19:25:22.178435   64960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:22.178836   64960 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:22.179109   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetState
	I0719 19:25:22.179965   64960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0719 19:25:22.180483   64960 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:22.180980   64960 main.go:141] libmachine: Using API Version  1
	I0719 19:25:22.180998   64960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:22.181385   64960 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:22.181934   64960 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:22.181977   64960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:22.182249   64960 kapi.go:59] client config for kubernetes-upgrade-821162: &rest.Config{Host:"https://192.168.61.58:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.crt", KeyFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.key", CAFile:"/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
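The rest.Config dumped above is what the addon code uses to talk to the upgraded cluster: the profile's client certificate and key plus the minikube CA. A minimal sketch building an equivalent client-go clientset from those files and issuing one read, assuming client-go is available; the StorageClass list mirrors the default-storageclass check that follows:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.61.58:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kubernetes-upgrade-821162/client.key",
                CAFile:   "/home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kind of read the default-storageclass addon logic relies on.
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d storage classes\n", len(scs.Items))
    }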
	I0719 19:25:22.182561   64960 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-821162"
	W0719 19:25:22.182574   64960 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:25:22.182600   64960 host.go:66] Checking if "kubernetes-upgrade-821162" exists ...
	I0719 19:25:22.182965   64960 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:22.182991   64960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:22.201913   64960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0719 19:25:22.202107   64960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0719 19:25:22.202352   64960 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:22.202780   64960 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:22.202825   64960 main.go:141] libmachine: Using API Version  1
	I0719 19:25:22.202839   64960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:22.203104   64960 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:22.203198   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetState
	I0719 19:25:22.203245   64960 main.go:141] libmachine: Using API Version  1
	I0719 19:25:22.203259   64960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:22.203676   64960 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:22.204227   64960 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:25:22.204248   64960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:25:22.205792   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:25:22.208000   64960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:21.798868   65113 cri.go:89] found id: ""
	I0719 19:25:21.798963   65113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:25:21.808723   65113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:25:21.820804   65113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:25:21.830669   65113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:25:21.830691   65113 kubeadm.go:157] found existing configuration files:
	
	I0719 19:25:21.830743   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:25:21.839685   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:25:21.839762   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:25:21.849166   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:25:21.858491   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:25:21.858541   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:25:21.871326   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:25:21.881930   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:25:21.881990   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:25:21.897810   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:25:21.907889   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:25:21.907940   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
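The grep/rm sequence above is the stale-config cleanup before kubeadm init: an existing kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed. A small local sketch of that rule (illustrative; the real check runs grep and rm over SSH, as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                // Missing file means a first start for this profile; nothing to clean.
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = os.Remove(f)
            }
        }
    }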
	I0719 19:25:21.919872   65113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:25:22.042825   65113 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:25:22.042907   65113 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:25:22.200052   65113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:25:22.200195   65113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:25:22.200314   65113 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:25:22.419955   65113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:25:22.209356   64960 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:25:22.209374   64960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:25:22.209391   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:25:22.212839   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:25:22.213303   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:24:03 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:25:22.213334   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:25:22.213635   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:25:22.216059   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:25:22.216229   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:25:22.216387   64960 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:25:22.225153   64960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0719 19:25:22.225702   64960 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:25:22.226215   64960 main.go:141] libmachine: Using API Version  1
	I0719 19:25:22.226233   64960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:25:22.226646   64960 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:25:22.226846   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetState
	I0719 19:25:22.228766   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .DriverName
	I0719 19:25:22.229039   64960 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:25:22.229053   64960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:25:22.229072   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHHostname
	I0719 19:25:22.232562   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:25:22.233317   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:93:d8", ip: ""} in network mk-kubernetes-upgrade-821162: {Iface:virbr1 ExpiryTime:2024-07-19 20:24:03 +0000 UTC Type:0 Mac:52:54:00:2d:93:d8 Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:kubernetes-upgrade-821162 Clientid:01:52:54:00:2d:93:d8}
	I0719 19:25:22.233350   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | domain kubernetes-upgrade-821162 has defined IP address 192.168.61.58 and MAC address 52:54:00:2d:93:d8 in network mk-kubernetes-upgrade-821162
	I0719 19:25:22.233555   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHPort
	I0719 19:25:22.233795   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHKeyPath
	I0719 19:25:22.234010   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .GetSSHUsername
	I0719 19:25:22.234196   64960 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kubernetes-upgrade-821162/id_rsa Username:docker}
	I0719 19:25:22.429837   64960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:25:22.452649   64960 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:25:22.452732   64960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:25:22.474850   64960 api_server.go:72] duration metric: took 315.661211ms to wait for apiserver process to appear ...
	I0719 19:25:22.474880   64960 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:25:22.474901   64960 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0719 19:25:22.480918   64960 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0719 19:25:22.481977   64960 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:25:22.482000   64960 api_server.go:131] duration metric: took 7.112648ms to wait for apiserver health ...
	I0719 19:25:22.482010   64960 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:25:22.489988   64960 system_pods.go:59] 8 kube-system pods found
	I0719 19:25:22.490015   64960 system_pods.go:61] "coredns-5cfdc65f69-2dpw5" [197d443b-5033-44fe-a1d7-939f5acd60b6] Running
	I0719 19:25:22.490022   64960 system_pods.go:61] "coredns-5cfdc65f69-hc2q6" [f74cc097-e257-424f-829f-985cc52b4540] Running
	I0719 19:25:22.490031   64960 system_pods.go:61] "etcd-kubernetes-upgrade-821162" [ac47e1be-87c0-4613-ab43-4341e7ed1ed9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:25:22.490037   64960 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-821162" [6cf86e57-f519-4c3e-928f-edf4ac2ba3c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:25:22.490057   64960 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-821162" [2028d7b0-06d3-40bf-b48a-f69db5cad165] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:25:22.490068   64960 system_pods.go:61] "kube-proxy-jmgz9" [6977e800-7ea0-4306-9390-674f30c19d7a] Running
	I0719 19:25:22.490075   64960 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-821162" [d4eb4f6c-f84d-4826-bf3a-e89052f155ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:25:22.490083   64960 system_pods.go:61] "storage-provisioner" [02810e57-5b03-4b0f-83a4-56623e9a0088] Running
	I0719 19:25:22.490094   64960 system_pods.go:74] duration metric: took 8.077041ms to wait for pod list to return data ...
	I0719 19:25:22.490108   64960 kubeadm.go:582] duration metric: took 330.924112ms to wait for: map[apiserver:true system_pods:true]
	I0719 19:25:22.490123   64960 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:25:22.492884   64960 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:25:22.492922   64960 node_conditions.go:123] node cpu capacity is 2
	I0719 19:25:22.492933   64960 node_conditions.go:105] duration metric: took 2.804831ms to run NodePressure ...
	I0719 19:25:22.492944   64960 start.go:241] waiting for startup goroutines ...
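After restarting the control plane, the 64960 run waits for the kube-apiserver process, then polls /healthz until it answers 200 "ok" before listing kube-system pods and checking node conditions. A minimal sketch of that health poll trusting the cluster CA; the address, CA path, and retry budget are illustrative:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // Trust the cluster CA that was copied onto the node earlier in the log.
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }

        url := "https://192.168.61.58:8443/healthz"
        for i := 0; i < 120; i++ { // roughly a minute of polling
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body) // "ok", as in the log
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("apiserver never became healthy")
    }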
	I0719 19:25:22.578704   64960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:25:22.587545   64960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:25:23.439713   64960 main.go:141] libmachine: Making call to close driver server
	I0719 19:25:23.439903   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Close
	I0719 19:25:23.439877   64960 main.go:141] libmachine: Making call to close driver server
	I0719 19:25:23.440009   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Close
	I0719 19:25:23.441906   64960 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:25:23.441975   64960 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:25:23.441981   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Closing plugin on server side
	I0719 19:25:23.441948   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Closing plugin on server side
	I0719 19:25:23.441997   64960 main.go:141] libmachine: Making call to close driver server
	I0719 19:25:23.442031   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Close
	I0719 19:25:23.442256   64960 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:25:23.442350   64960 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:25:23.442371   64960 main.go:141] libmachine: Making call to close driver server
	I0719 19:25:23.442389   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Close
	I0719 19:25:23.442305   64960 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:25:23.442575   64960 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:25:23.442723   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Closing plugin on server side
	I0719 19:25:23.442325   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Closing plugin on server side
	I0719 19:25:23.442837   64960 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:25:23.442861   64960 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:25:23.451348   64960 main.go:141] libmachine: Making call to close driver server
	I0719 19:25:23.451370   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) Calling .Close
	I0719 19:25:23.451741   64960 main.go:141] libmachine: (kubernetes-upgrade-821162) DBG | Closing plugin on server side
	I0719 19:25:23.451798   64960 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:25:23.451808   64960 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:25:23.454461   64960 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 19:25:23.455819   64960 addons.go:510] duration metric: took 1.296607545s for enable addons: enabled=[storage-provisioner default-storageclass]
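Enabling the two addons above amounts to applying the manifests copied to /etc/kubernetes/addons with the cluster's own kubectl binary and kubeconfig, exactly as the two ssh_runner commands show. A local sketch of issuing the same commands via os/exec; in the test they run over SSH inside the VM:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        }
        for _, m := range manifests {
            // Same form as the logged command: sudo KUBECONFIG=... kubectl apply -f ...
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                kubectl, "apply", "-f", m)
            out, err := cmd.CombinedOutput()
            if err != nil {
                panic(fmt.Sprintf("apply %s: %v\n%s", m, err, out))
            }
            fmt.Printf("applied %s\n%s", m, out)
        }
    }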
	I0719 19:25:23.455871   64960 start.go:246] waiting for cluster config update ...
	I0719 19:25:23.455886   64960 start.go:255] writing updated cluster config ...
	I0719 19:25:23.456189   64960 ssh_runner.go:195] Run: rm -f paused
	I0719 19:25:23.521382   64960 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:25:23.523193   64960 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-821162" cluster and "default" namespace by default
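The closing "minor skew: 1" line compares the local kubectl version (1.30.3) with the server version reported by the new cluster (1.31.0-beta.0). A sketch of the same comparison via client-go discovery, using the default kubeconfig loading rules; this is illustrative, not minikube's start.go:

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        loading := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            loading, &clientcmd.ConfigOverrides{}).ClientConfig()
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sv, err := dc.ServerVersion()
        if err != nil {
            panic(err)
        }
        // A skew of more than one minor version between kubectl and the cluster
        // is what the warning in the log is guarding against.
        fmt.Printf("cluster version: %s.%s (%s)\n", sv.Major, sv.Minor, sv.GitVersion)
    }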
	
	
	==> CRI-O <==
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.325164561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721417124325134707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b14fa2c-7866-4934-8cd1-6027e1ac85bd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.325942077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f22d56a-d3a9-430e-baab-ef8cfedc064f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.326019004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f22d56a-d3a9-430e-baab-ef8cfedc064f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.327132518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfa1aa480be1282429915613ebd7a2d8a1856d67c22838cce2774d225c79a7ca,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721417114469390791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2553bab8c33db29ad8e78e2f3c589577b90cc3328551ed4df600550154caf8e3,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721417114454559055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f52d76b784495fe69c09061d23d94598084c186d0e39a23ecd23bac16fb8d70,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721417114449284994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a23efc0775be97a0d4cccf7c7b36ae19d4ec228e2960967c0ea7de8e42a8f4d,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721417114436282172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72e9296c098e2f2d4c0462c18b05f7cc60e924cce4517c4ab31d9c826f1cc03,PodSandboxId:5cbc5637831f7742cd8e42cd4fca717544596aa84ef73e85af64f2667b0acbff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102186705271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ba6c75f66e205cd5c0756138bbafaf09207e24352fdc1fbe45e8a5e6adb036,PodSandboxId:4d76416c0a9f695c5af57f0597d4741f0869edde801cd8ee56855c1232e925f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102091893778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d4
43b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f4e7d712e6d6af8d321d596f5af869f2f4b365b1009529cdd5014fb0115da09,PodSandboxId:1b1c8e73bc18507769ea4626ce28fbd1d68631117c2c06b11bf75ba4d06debd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Crea
tedAt:1721417101372207029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff82929033a30ab65729bac4c9566287d84afd303444d7e387bda36b442f6f28,PodSandboxId:842241b6a1dd1f8669befd58f1cd56bbae1d8425ae89a463f7352e380c6819ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172141710140356
1323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417101337290421,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721417101357064834,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721417101235540526,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721417101262379541,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2,PodSandboxId:bd5db59c12adb7e2e0e5b82e9019236d0d3aff45435cad47d0cac77c38dabc79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073980927007,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d,PodSandboxId:7567eb1db7ba3b86644e91cb2d64f28a5fa44ab97a0a94fa17c1e54ba1045760,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073941064457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d443b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4,PodSandboxId:b8a1781f7a113dcc2f9826600517cf22557815a0e294ca8e121b4f450eb38f1a,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721417072461699949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e,PodSandboxId:4479e47c5325ae8349a796286b704b794036665b590fea98a4ad676e68f5a6a9,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417072393254698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f22d56a-d3a9-430e-baab-ef8cfedc064f name=/runtime.v1.RuntimeService/ListContainers
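	The entries above and below are crio serving CRI API calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) and logging them through its otel-collector interceptors. As a hedged aside (not part of the captured output), the same endpoints can be queried by hand on the node with crictl, assuming crictl is installed and crio is listening on its default socket:
	
	  # Query the runtime version (RuntimeService/Version)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # List all containers, running and exited (RuntimeService/ListContainers with no filter)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  # Show image filesystem usage (ImageService/ImageFsInfo)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo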
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.374003246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=853dcd00-ff9d-4fdb-9817-29b9055dfd96 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.374321411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=853dcd00-ff9d-4fdb-9817-29b9055dfd96 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.375406022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d26a9519-5fe1-4001-898c-8885d9f204f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.375805890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721417124375782896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d26a9519-5fe1-4001-898c-8885d9f204f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.376509837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52a7e802-693b-4ee9-8aa7-6154fd1be89f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.376575237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52a7e802-693b-4ee9-8aa7-6154fd1be89f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.376922059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfa1aa480be1282429915613ebd7a2d8a1856d67c22838cce2774d225c79a7ca,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721417114469390791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2553bab8c33db29ad8e78e2f3c589577b90cc3328551ed4df600550154caf8e3,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721417114454559055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f52d76b784495fe69c09061d23d94598084c186d0e39a23ecd23bac16fb8d70,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721417114449284994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a23efc0775be97a0d4cccf7c7b36ae19d4ec228e2960967c0ea7de8e42a8f4d,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721417114436282172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72e9296c098e2f2d4c0462c18b05f7cc60e924cce4517c4ab31d9c826f1cc03,PodSandboxId:5cbc5637831f7742cd8e42cd4fca717544596aa84ef73e85af64f2667b0acbff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102186705271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ba6c75f66e205cd5c0756138bbafaf09207e24352fdc1fbe45e8a5e6adb036,PodSandboxId:4d76416c0a9f695c5af57f0597d4741f0869edde801cd8ee56855c1232e925f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102091893778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d4
43b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f4e7d712e6d6af8d321d596f5af869f2f4b365b1009529cdd5014fb0115da09,PodSandboxId:1b1c8e73bc18507769ea4626ce28fbd1d68631117c2c06b11bf75ba4d06debd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Crea
tedAt:1721417101372207029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff82929033a30ab65729bac4c9566287d84afd303444d7e387bda36b442f6f28,PodSandboxId:842241b6a1dd1f8669befd58f1cd56bbae1d8425ae89a463f7352e380c6819ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172141710140356
1323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417101337290421,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721417101357064834,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721417101235540526,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721417101262379541,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2,PodSandboxId:bd5db59c12adb7e2e0e5b82e9019236d0d3aff45435cad47d0cac77c38dabc79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073980927007,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d,PodSandboxId:7567eb1db7ba3b86644e91cb2d64f28a5fa44ab97a0a94fa17c1e54ba1045760,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073941064457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d443b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4,PodSandboxId:b8a1781f7a113dcc2f9826600517cf22557815a0e294ca8e121b4f450eb38f1a,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721417072461699949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e,PodSandboxId:4479e47c5325ae8349a796286b704b794036665b590fea98a4ad676e68f5a6a9,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417072393254698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52a7e802-693b-4ee9-8aa7-6154fd1be89f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.428952501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=827721db-2b51-4d10-b5ca-801df162195c name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.429060008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=827721db-2b51-4d10-b5ca-801df162195c name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.430321692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9541fb0-efc3-415e-ba64-81e90152921e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.431028475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721417124430991140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9541fb0-efc3-415e-ba64-81e90152921e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.431884624Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ced7a2d-b7d8-45ef-bc82-7a41f6136d35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.431973054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ced7a2d-b7d8-45ef-bc82-7a41f6136d35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.432436864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfa1aa480be1282429915613ebd7a2d8a1856d67c22838cce2774d225c79a7ca,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721417114469390791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2553bab8c33db29ad8e78e2f3c589577b90cc3328551ed4df600550154caf8e3,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721417114454559055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f52d76b784495fe69c09061d23d94598084c186d0e39a23ecd23bac16fb8d70,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721417114449284994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a23efc0775be97a0d4cccf7c7b36ae19d4ec228e2960967c0ea7de8e42a8f4d,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721417114436282172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72e9296c098e2f2d4c0462c18b05f7cc60e924cce4517c4ab31d9c826f1cc03,PodSandboxId:5cbc5637831f7742cd8e42cd4fca717544596aa84ef73e85af64f2667b0acbff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102186705271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ba6c75f66e205cd5c0756138bbafaf09207e24352fdc1fbe45e8a5e6adb036,PodSandboxId:4d76416c0a9f695c5af57f0597d4741f0869edde801cd8ee56855c1232e925f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102091893778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d4
43b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f4e7d712e6d6af8d321d596f5af869f2f4b365b1009529cdd5014fb0115da09,PodSandboxId:1b1c8e73bc18507769ea4626ce28fbd1d68631117c2c06b11bf75ba4d06debd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Crea
tedAt:1721417101372207029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff82929033a30ab65729bac4c9566287d84afd303444d7e387bda36b442f6f28,PodSandboxId:842241b6a1dd1f8669befd58f1cd56bbae1d8425ae89a463f7352e380c6819ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172141710140356
1323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417101337290421,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721417101357064834,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721417101235540526,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721417101262379541,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2,PodSandboxId:bd5db59c12adb7e2e0e5b82e9019236d0d3aff45435cad47d0cac77c38dabc79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073980927007,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d,PodSandboxId:7567eb1db7ba3b86644e91cb2d64f28a5fa44ab97a0a94fa17c1e54ba1045760,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073941064457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d443b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4,PodSandboxId:b8a1781f7a113dcc2f9826600517cf22557815a0e294ca8e121b4f450eb38f1a,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721417072461699949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e,PodSandboxId:4479e47c5325ae8349a796286b704b794036665b590fea98a4ad676e68f5a6a9,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417072393254698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ced7a2d-b7d8-45ef-bc82-7a41f6136d35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.480099215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43d58fe9-4985-4ade-aa0e-013087c2b6b0 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.480215058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43d58fe9-4985-4ade-aa0e-013087c2b6b0 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.485743170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e26accf-7f81-4d91-8697-f2cd20d6e6d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.486908764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721417124486824633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e26accf-7f81-4d91-8697-f2cd20d6e6d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.489111033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d39dbb23-d4f9-4617-a53b-b297233c1b33 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.489223588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d39dbb23-d4f9-4617-a53b-b297233c1b33 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:25:24 kubernetes-upgrade-821162 crio[2295]: time="2024-07-19 19:25:24.489935826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cfa1aa480be1282429915613ebd7a2d8a1856d67c22838cce2774d225c79a7ca,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721417114469390791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2553bab8c33db29ad8e78e2f3c589577b90cc3328551ed4df600550154caf8e3,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721417114454559055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f52d76b784495fe69c09061d23d94598084c186d0e39a23ecd23bac16fb8d70,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721417114449284994,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a23efc0775be97a0d4cccf7c7b36ae19d4ec228e2960967c0ea7de8e42a8f4d,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721417114436282172,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d72e9296c098e2f2d4c0462c18b05f7cc60e924cce4517c4ab31d9c826f1cc03,PodSandboxId:5cbc5637831f7742cd8e42cd4fca717544596aa84ef73e85af64f2667b0acbff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102186705271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ba6c75f66e205cd5c0756138bbafaf09207e24352fdc1fbe45e8a5e6adb036,PodSandboxId:4d76416c0a9f695c5af57f0597d4741f0869edde801cd8ee56855c1232e925f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417102091893778,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d4
43b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f4e7d712e6d6af8d321d596f5af869f2f4b365b1009529cdd5014fb0115da09,PodSandboxId:1b1c8e73bc18507769ea4626ce28fbd1d68631117c2c06b11bf75ba4d06debd8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Crea
tedAt:1721417101372207029,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff82929033a30ab65729bac4c9566287d84afd303444d7e387bda36b442f6f28,PodSandboxId:842241b6a1dd1f8669befd58f1cd56bbae1d8425ae89a463f7352e380c6819ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172141710140356
1323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c,PodSandboxId:a7d54f06e2246acf8387306fefd79c63333743bf39419e3fe0d5118ea4f3e29b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417101337290421,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e06ef5725540b092711d4479d77670,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5,PodSandboxId:c8633ad2c391997dc5bb7650335c64c21abb25e0ff2e394197456f05d8ecdb80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721417101357064834,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f885c8d8522f6aea32f981e154240da8,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5,PodSandboxId:548cf2ba3125443d51d2255d4640b51ed3dd39a9d5acfb53e35c3eb88fd1745a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721417101235540526,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa7c3533c83d5f95cc2660717523ae46,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c,PodSandboxId:e324d69fdb74b6b7bf5c8ebba4b5ae349ecf6c4e17cfa944c9b325e78b879022,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721417101262379541,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-821162,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85506705e86bfcc0f2d8988dad06a7d1,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2,PodSandboxId:bd5db59c12adb7e2e0e5b82e9019236d0d3aff45435cad47d0cac77c38dabc79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073980927007,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-5cfdc65f69-hc2q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74cc097-e257-424f-829f-985cc52b4540,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d,PodSandboxId:7567eb1db7ba3b86644e91cb2d64f28a5fa44ab97a0a94fa17c1e54ba1045760,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721417073941064457,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2dpw5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 197d443b-5033-44fe-a1d7-939f5acd60b6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4,PodSandboxId:b8a1781f7a113dcc2f9826600517cf22557815a0e294ca8e121b4f450eb38f1a,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721417072461699949,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jmgz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6977e800-7ea0-4306-9390-674f30c19d7a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e,PodSandboxId:4479e47c5325ae8349a796286b704b794036665b590fea98a4ad676e68f5a6a9,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417072393254698,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02810e57-5b03-4b0f-83a4-56623e9a0088,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d39dbb23-d4f9-4617-a53b-b297233c1b33 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cfa1aa480be12       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   10 seconds ago      Running             kube-scheduler            2                   c8633ad2c3919       kube-scheduler-kubernetes-upgrade-821162
	2553bab8c33db       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   10 seconds ago      Running             kube-controller-manager   2                   e324d69fdb74b       kube-controller-manager-kubernetes-upgrade-821162
	0f52d76b78449       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   10 seconds ago      Running             kube-apiserver            2                   a7d54f06e2246       kube-apiserver-kubernetes-upgrade-821162
	2a23efc0775be       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   10 seconds ago      Running             etcd                      2                   548cf2ba31254       etcd-kubernetes-upgrade-821162
	d72e9296c098e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Running             coredns                   1                   5cbc5637831f7       coredns-5cfdc65f69-hc2q6
	b7ba6c75f66e2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Running             coredns                   1                   4d76416c0a9f6       coredns-5cfdc65f69-2dpw5
	ff82929033a30       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   23 seconds ago      Running             kube-proxy                1                   842241b6a1dd1       kube-proxy-jmgz9
	6f4e7d712e6d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Running             storage-provisioner       1                   1b1c8e73bc185       storage-provisioner
	782b4ec714830       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   23 seconds ago      Exited              kube-scheduler            1                   c8633ad2c3919       kube-scheduler-kubernetes-upgrade-821162
	c995a95003e5b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   23 seconds ago      Exited              kube-apiserver            1                   a7d54f06e2246       kube-apiserver-kubernetes-upgrade-821162
	802d81d6a0406       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   23 seconds ago      Exited              kube-controller-manager   1                   e324d69fdb74b       kube-controller-manager-kubernetes-upgrade-821162
	d6cf9f513c5aa       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   23 seconds ago      Exited              etcd                      1                   548cf2ba31254       etcd-kubernetes-upgrade-821162
	705f767a43ac3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   51 seconds ago      Exited              coredns                   0                   bd5db59c12adb       coredns-5cfdc65f69-hc2q6
	c0c518b2143da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   51 seconds ago      Exited              coredns                   0                   7567eb1db7ba3       coredns-5cfdc65f69-2dpw5
	d91559cc099e6       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   52 seconds ago      Exited              kube-proxy                0                   b8a1781f7a113       kube-proxy-jmgz9
	2cdac3472a1b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago      Exited              storage-provisioner       0                   4479e47c5325a       storage-provisioner
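
The table above is the node-side container listing captured right after the second restart (the 10- and 23-second-old attempts). If the kubernetes-upgrade-821162 profile is still running, a similar snapshot can be taken by hand; a minimal sketch, assuming the minikube binary from this run and that crictl is available on the node (standard for the crio runtime used here):

  # Open a shell on the node for this profile, then list all containers, including exited attempts
  out/minikube-linux-amd64 -p kubernetes-upgrade-821162 ssh
  sudo crictl ps -a
  # Fetch the log of a specific attempt by the (possibly truncated) container ID in the first column,
  # e.g. the exited kube-scheduler attempt 1 shown above
  sudo crictl logs 782b4ec714830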
	
	
	==> coredns [705f767a43ac3b184f0c52f9a8dd5ef5b522a77c553045cd0536b78030c787c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b7ba6c75f66e205cd5c0756138bbafaf09207e24352fdc1fbe45e8a5e6adb036] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c0c518b2143da72280d1024ce044d00b0434cbe8863b75dcca3522bd2e7fbe9d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d72e9296c098e2f2d4c0462c18b05f7cc60e924cce4517c4ab31d9c826f1cc03] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
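
The four CoreDNS blocks above come from the two coredns-5cfdc65f69 pods, before and after the restart; the "waiting for Kubernetes API" lines simply cover the window while the apiserver was coming back up. The same logs can be pulled with kubectl; a minimal sketch, assuming minikube's default context naming (context name equals the profile name):

  # Current and previous container logs for one of the CoreDNS pods named in the headers above
  kubectl --context kubernetes-upgrade-821162 -n kube-system logs coredns-5cfdc65f69-2dpw5
  kubectl --context kubernetes-upgrade-821162 -n kube-system logs coredns-5cfdc65f69-2dpw5 --previous
  # Or all CoreDNS pods at once via the label selector, with a per-pod prefix on each line
  kubectl --context kubernetes-upgrade-821162 -n kube-system logs -l k8s-app=kube-dns --prefix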
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-821162
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-821162
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:24:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-821162
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:25:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:25:18 +0000   Fri, 19 Jul 2024 19:24:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:25:18 +0000   Fri, 19 Jul 2024 19:24:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:25:18 +0000   Fri, 19 Jul 2024 19:24:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:25:18 +0000   Fri, 19 Jul 2024 19:24:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.58
	  Hostname:    kubernetes-upgrade-821162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5019762513f34e81bcef1a090d3232bf
	  System UUID:                50197625-13f3-4e81-bcef-1a090d3232bf
	  Boot ID:                    2e20d92f-6ccd-4baf-bb72-c88c1d04fc9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-2dpw5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 coredns-5cfdc65f69-hc2q6                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-kubernetes-upgrade-821162                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         54s
	  kube-system                 kube-apiserver-kubernetes-upgrade-821162             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-821162    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-jmgz9                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-kubernetes-upgrade-821162             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s (x8 over 68s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     64s (x7 over 68s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    64s (x8 over 68s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           54s                node-controller  Node kubernetes-upgrade-821162 event: Registered Node kubernetes-upgrade-821162 in Controller
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet          Node kubernetes-upgrade-821162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-821162 event: Registered Node kubernetes-upgrade-821162 in Controller
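
The allocated-resources figures are easy to check against the node's capacity: 850m of CPU requests on a 2-CPU (2000m) node is 42.5%, reported as 42%, and 240Mi of memory requests against 2164184Ki of allocatable memory is roughly 11%. The same view can be reproduced with kubectl; a minimal sketch, again assuming the default context name:

  # Full node description (the source of the section above) and a compact capacity view
  kubectl --context kubernetes-upgrade-821162 describe node kubernetes-upgrade-821162
  kubectl --context kubernetes-upgrade-821162 get node kubernetes-upgrade-821162 -o wide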
	
	
	==> dmesg <==
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.406401] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.136426] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.203911] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.176648] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.316078] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +4.262882] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.072867] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.726497] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +3.785262] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.188492] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.081280] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.402772] kauditd_printk_skb: 106 callbacks suppressed
	[ +17.347390] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.152197] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +0.184009] systemd-fstab-generator[2241]: Ignoring "noauto" option for root device
	[  +0.145363] systemd-fstab-generator[2253]: Ignoring "noauto" option for root device
	[  +0.297342] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +7.017890] systemd-fstab-generator[2430]: Ignoring "noauto" option for root device
	[  +0.084713] kauditd_printk_skb: 100 callbacks suppressed
	[Jul19 19:25] systemd-fstab-generator[3437]: Ignoring "noauto" option for root device
	[  +0.094880] kauditd_printk_skb: 119 callbacks suppressed
	[  +8.489573] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.134059] kauditd_printk_skb: 39 callbacks suppressed
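
The dmesg excerpt is dominated by systemd-fstab-generator and kauditd notices from the two kubelet restarts; nothing here points at a kernel-level problem. To capture it directly on a live profile, one option (a sketch, assuming the same minikube binary and that the pipe is executed on the node side) is:

  # Tail the kernel ring buffer on the node without opening an interactive shell
  out/minikube-linux-amd64 -p kubernetes-upgrade-821162 ssh -- "sudo dmesg | tail -n 40"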
	
	
	==> etcd [2a23efc0775be97a0d4cccf7c7b36ae19d4ec228e2960967c0ea7de8e42a8f4d] <==
	{"level":"warn","ts":"2024-07-19T19:25:20.372725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.276094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" ","response":"range_response_count:1 size:5229"}
	{"level":"info","ts":"2024-07-19T19:25:20.372788Z","caller":"traceutil/trace.go:171","msg":"trace[601452453] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5; range_end:; response_count:1; response_revision:458; }","duration":"317.318183ms","start":"2024-07-19T19:25:20.055438Z","end":"2024-07-19T19:25:20.372756Z","steps":["trace[601452453] 'agreement among raft nodes before linearized reading'  (duration: 317.210634ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:20.372813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:20.055374Z","time spent":"317.433238ms","remote":"127.0.0.1:49484","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5252,"request content":"key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" "}
	{"level":"warn","ts":"2024-07-19T19:25:20.373033Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:20.002787Z","time spent":"369.267642ms","remote":"127.0.0.1:49410","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/configmaps/kube-system/coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-system/coredns\" value_size:549 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T19:25:20.373311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.955928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:617"}
	{"level":"info","ts":"2024-07-19T19:25:20.373359Z","caller":"traceutil/trace.go:171","msg":"trace[1079492372] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:458; }","duration":"258.020925ms","start":"2024-07-19T19:25:20.115324Z","end":"2024-07-19T19:25:20.373345Z","steps":["trace[1079492372] 'agreement among raft nodes before linearized reading'  (duration: 257.634857ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:25:20.520056Z","caller":"traceutil/trace.go:171","msg":"trace[1366352305] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"128.935709ms","start":"2024-07-19T19:25:20.390669Z","end":"2024-07-19T19:25:20.519605Z","steps":["trace[1366352305] 'process raft request'  (duration: 128.2036ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:21.271011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.215531ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324139297633249684 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:328 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2024-07-19T19:25:21.271237Z","caller":"traceutil/trace.go:171","msg":"trace[1642909603] linearizableReadLoop","detail":"{readStateIndex:480; appliedIndex:478; }","duration":"878.821698ms","start":"2024-07-19T19:25:20.392407Z","end":"2024-07-19T19:25:21.271228Z","steps":["trace[1642909603] 'read index received'  (duration: 126.550492ms)","trace[1642909603] 'applied index is now lower than readState.Index'  (duration: 752.270731ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:25:21.271276Z","caller":"traceutil/trace.go:171","msg":"trace[2001893766] transaction","detail":"{read_only:false; number_of_response:0; response_revision:459; }","duration":"878.940217ms","start":"2024-07-19T19:25:20.392302Z","end":"2024-07-19T19:25:21.271242Z","steps":["trace[2001893766] 'process raft request'  (duration: 494.302285ms)","trace[2001893766] 'compare'  (duration: 384.17627ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:25:21.271376Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:20.392285Z","time spent":"879.052777ms","remote":"127.0.0.1:49630","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterroles/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:coredns\" value_size:328 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T19:25:21.271431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"878.983706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2024-07-19T19:25:21.271534Z","caller":"traceutil/trace.go:171","msg":"trace[170751900] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:460; }","duration":"879.123962ms","start":"2024-07-19T19:25:20.392404Z","end":"2024-07-19T19:25:21.271528Z","steps":["trace[170751900] 'agreement among raft nodes before linearized reading'  (duration: 878.88734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:21.271577Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:20.392384Z","time spent":"879.18471ms","remote":"127.0.0.1:49494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":228,"request content":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" "}
	{"level":"info","ts":"2024-07-19T19:25:21.271763Z","caller":"traceutil/trace.go:171","msg":"trace[510909565] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"873.367481ms","start":"2024-07-19T19:25:20.398386Z","end":"2024-07-19T19:25:21.271754Z","steps":["trace[510909565] 'process raft request'  (duration: 872.770281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:21.271986Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:20.398311Z","time spent":"873.508204ms","remote":"127.0.0.1:49484","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5392,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" mod_revision:438 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" value_size:5333 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" > >"}
	{"level":"warn","ts":"2024-07-19T19:25:21.973519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"569.525822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14324139297633249691 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2024-07-19T19:25:21.973827Z","caller":"traceutil/trace.go:171","msg":"trace[243468094] linearizableReadLoop","detail":"{readStateIndex:482; appliedIndex:481; }","duration":"687.991124ms","start":"2024-07-19T19:25:21.285825Z","end":"2024-07-19T19:25:21.973816Z","steps":["trace[243468094] 'read index received'  (duration: 118.040977ms)","trace[243468094] 'applied index is now lower than readState.Index'  (duration: 569.949467ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:25:21.974011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"688.172232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2024-07-19T19:25:21.974099Z","caller":"traceutil/trace.go:171","msg":"trace[53497886] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:460; }","duration":"688.265706ms","start":"2024-07-19T19:25:21.285823Z","end":"2024-07-19T19:25:21.974089Z","steps":["trace[53497886] 'agreement among raft nodes before linearized reading'  (duration: 688.107649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:21.975794Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:21.285801Z","time spent":"689.976481ms","remote":"127.0.0.1:49494","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":1,"response size":239,"request content":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" "}
	{"level":"info","ts":"2024-07-19T19:25:21.975929Z","caller":"traceutil/trace.go:171","msg":"trace[1487526376] transaction","detail":"{read_only:false; number_of_response:0; response_revision:460; }","duration":"691.28585ms","start":"2024-07-19T19:25:21.284631Z","end":"2024-07-19T19:25:21.975917Z","steps":["trace[1487526376] 'process raft request'  (duration: 119.225524ms)","trace[1487526376] 'compare'  (duration: 569.491792ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:25:21.974392Z","caller":"traceutil/trace.go:171","msg":"trace[1922342667] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"687.047127ms","start":"2024-07-19T19:25:21.287332Z","end":"2024-07-19T19:25:21.974379Z","steps":["trace[1922342667] 'process raft request'  (duration: 686.425492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:25:21.97601Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:21.284621Z","time spent":"691.354942ms","remote":"127.0.0.1:49642","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T19:25:21.976131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:25:21.287322Z","time spent":"688.746745ms","remote":"127.0.0.1:49484","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5214,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" mod_revision:460 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" value_size:5155 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5cfdc65f69-2dpw5\" > >"}
	
	
	==> etcd [d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5] <==
	{"level":"info","ts":"2024-07-19T19:25:02.96032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"da8d605abec0c6c9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:25:02.960369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"da8d605abec0c6c9 received MsgPreVoteResp from da8d605abec0c6c9 at term 2"}
	{"level":"info","ts":"2024-07-19T19:25:02.960406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"da8d605abec0c6c9 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T19:25:02.96043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"da8d605abec0c6c9 received MsgVoteResp from da8d605abec0c6c9 at term 3"}
	{"level":"info","ts":"2024-07-19T19:25:02.9605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"da8d605abec0c6c9 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T19:25:02.960526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: da8d605abec0c6c9 elected leader da8d605abec0c6c9 at term 3"}
	{"level":"info","ts":"2024-07-19T19:25:02.962888Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"da8d605abec0c6c9","local-member-attributes":"{Name:kubernetes-upgrade-821162 ClientURLs:[https://192.168.61.58:2379]}","request-path":"/0/members/da8d605abec0c6c9/attributes","cluster-id":"2d1820130fad6930","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:25:02.963105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:25:02.963967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:25:02.964757Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.58:2379"}
	{"level":"info","ts":"2024-07-19T19:25:02.965616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:25:02.966709Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:25:02.967827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:25:02.970517Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:25:02.970569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:25:12.460634Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T19:25:12.460685Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-821162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.58:2380"],"advertise-client-urls":["https://192.168.61.58:2379"]}
	{"level":"warn","ts":"2024-07-19T19:25:12.460751Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.58:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:25:12.460783Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.58:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:25:12.460856Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:25:12.460865Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T19:25:12.462364Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"da8d605abec0c6c9","current-leader-member-id":"da8d605abec0c6c9"}
	{"level":"info","ts":"2024-07-19T19:25:12.465867Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.61.58:2380"}
	{"level":"info","ts":"2024-07-19T19:25:12.465978Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.61.58:2380"}
	{"level":"info","ts":"2024-07-19T19:25:12.465992Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-821162","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.58:2380"],"advertise-client-urls":["https://192.168.61.58:2379"]}
	
	
	==> kernel <==
	 19:25:25 up 1 min,  0 users,  load average: 1.31, 0.35, 0.12
	Linux kubernetes-upgrade-821162 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f52d76b784495fe69c09061d23d94598084c186d0e39a23ecd23bac16fb8d70] <==
	I0719 19:25:17.883671       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 19:25:17.883751       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 19:25:17.883774       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 19:25:17.883818       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:25:17.885563       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0719 19:25:17.885595       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0719 19:25:17.888211       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 19:25:17.888254       1 policy_source.go:224] refreshing policies
	I0719 19:25:17.896012       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:25:17.912142       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 19:25:17.934544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:25:17.934797       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 19:25:17.966385       1 aggregator.go:171] initial CRD sync complete...
	I0719 19:25:17.966431       1 autoregister_controller.go:144] Starting autoregister controller
	I0719 19:25:17.966474       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 19:25:17.966485       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:25:17.986586       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0719 19:25:18.737113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 19:25:19.010141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.58]
	I0719 19:25:19.017540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 19:25:21.991782       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 19:25:22.018186       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 19:25:22.065097       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 19:25:22.115158       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:25:22.127846       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c] <==
	I0719 19:25:04.582813       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0719 19:25:04.582863       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0719 19:25:04.583167       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0719 19:25:04.586605       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0719 19:25:04.610903       1 controller.go:157] Shutting down quota evaluator
	I0719 19:25:04.610939       1 controller.go:176] quota evaluator worker shutdown
	I0719 19:25:04.611115       1 controller.go:176] quota evaluator worker shutdown
	I0719 19:25:04.611139       1 controller.go:176] quota evaluator worker shutdown
	I0719 19:25:04.611148       1 controller.go:176] quota evaluator worker shutdown
	I0719 19:25:04.611156       1 controller.go:176] quota evaluator worker shutdown
	I0719 19:25:04.613838       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0719 19:25:05.361920       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:05.364898       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:06.362634       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:06.364382       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:07.362865       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:07.364821       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:08.362513       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:08.364327       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:09.362791       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:09.364509       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:10.362145       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:10.363943       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:11.363103       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0719 19:25:11.364990       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
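
The retry loop above appears to come from bootstrap hooks of the exited apiserver (attempt 1) still dialing localhost:8443 after its listener was stopped; attempt 2, shown in the preceding block, came up cleanly. Once an apiserver is reachable again, its aggregated health checks can be read directly; a minimal sketch:

  # Verbose readiness and liveness breakdown served by the apiserver itself
  kubectl --context kubernetes-upgrade-821162 get --raw='/readyz?verbose'
  kubectl --context kubernetes-upgrade-821162 get --raw='/livez?verbose'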
	
	
	==> kube-controller-manager [2553bab8c33db29ad8e78e2f3c589577b90cc3328551ed4df600550154caf8e3] <==
	I0719 19:25:23.203832       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-821162"
	I0719 19:25:23.203881       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 19:25:23.215870       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 19:25:23.222637       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0719 19:25:23.227552       1 shared_informer.go:320] Caches are synced for GC
	I0719 19:25:23.235650       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 19:25:23.241696       1 shared_informer.go:320] Caches are synced for job
	I0719 19:25:23.242889       1 shared_informer.go:320] Caches are synced for ephemeral
	I0719 19:25:23.251789       1 shared_informer.go:320] Caches are synced for cronjob
	I0719 19:25:23.253527       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 19:25:23.253618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-821162"
	I0719 19:25:23.254357       1 shared_informer.go:320] Caches are synced for expand
	I0719 19:25:23.257236       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 19:25:23.261351       1 shared_informer.go:320] Caches are synced for disruption
	I0719 19:25:23.263639       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 19:25:23.308352       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 19:25:23.316302       1 shared_informer.go:320] Caches are synced for PV protection
	I0719 19:25:23.318759       1 shared_informer.go:320] Caches are synced for service account
	I0719 19:25:23.337650       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 19:25:23.372000       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 19:25:23.385055       1 shared_informer.go:320] Caches are synced for namespace
	I0719 19:25:23.392736       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 19:25:23.392836       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 19:25:23.405898       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 19:25:23.429414       1 shared_informer.go:320] Caches are synced for garbage collector
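
The restarted controller-manager (attempt 2) reports all informer caches synced, which lines up with the second RegisteredNode event in the node description above. Its leader-election status is visible as a Lease object; a minimal sketch:

  # The holder of this lease is the controller-manager instance currently leading
  kubectl --context kubernetes-upgrade-821162 -n kube-system get lease kube-controller-manager -o yaml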
	
	
	==> kube-controller-manager [802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c] <==
	I0719 19:25:02.801895       1 serving.go:386] Generated self-signed cert in-memory
	I0719 19:25:03.527532       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0719 19:25:03.527570       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:25:03.528994       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 19:25:03.529562       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 19:25:03.529655       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 19:25:03.529738       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [d91559cc099e6bddbbb42189960b9aa49f4f070791393d54f2543b708cef05f4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 19:24:32.660988       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 19:24:32.674400       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.58"]
	E0719 19:24:32.674596       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 19:24:32.706610       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 19:24:32.706692       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:24:32.706740       1 server_linux.go:170] "Using iptables Proxier"
	I0719 19:24:32.708936       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 19:24:32.709320       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 19:24:32.709364       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:24:32.710706       1 config.go:197] "Starting service config controller"
	I0719 19:24:32.710749       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:24:32.710784       1 config.go:104] "Starting endpoint slice config controller"
	I0719 19:24:32.710800       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:24:32.711409       1 config.go:326] "Starting node config controller"
	I0719 19:24:32.712058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:24:32.811726       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:24:32.811827       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:24:32.812402       1 shared_informer.go:320] Caches are synced for node config
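
The "Error cleaning up nftables rules ... Operation not supported" lines are logged while kube-proxy tries to clear rules from the nftables backend it is not using here; the run then proceeds with the iptables proxier, as the following lines show. Whether the iptables rules were actually programmed can be checked on the node; a sketch, run inside a minikube ssh session for this profile and assuming the standard KUBE-SERVICES chain created by the iptables proxier:

  # The iptables proxier programs per-service rules under the KUBE-SERVICES chain in the nat table
  sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20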
	
	
	==> kube-proxy [ff82929033a30ab65729bac4c9566287d84afd303444d7e387bda36b442f6f28] <==
	W0719 19:25:04.635627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:04.635668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:04.635727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:04.635764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:05.532609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:05.532653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:05.673766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:05.673860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:05.989171       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:05.989225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:07.543749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:07.543867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:07.820027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:07.820148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:08.569795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:08.569875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:12.534892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:12.534956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:13.045909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:13.045963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	W0719 19:25:13.134973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0": dial tcp 192.168.61.58:8443: connect: connection refused
	E0719 19:25:13.135045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-821162&limit=500&resourceVersion=0\": dial tcp 192.168.61.58:8443: connect: connection refused" logger="UnhandledError"
	I0719 19:25:21.334188       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:25:21.735113       1 shared_informer.go:320] Caches are synced for node config
	I0719 19:25:24.434549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5] <==
	
	
	==> kube-scheduler [cfa1aa480be1282429915613ebd7a2d8a1856d67c22838cce2774d225c79a7ca] <==
	I0719 19:25:15.860217       1 serving.go:386] Generated self-signed cert in-memory
	W0719 19:25:17.855920       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 19:25:17.856102       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:25:17.856202       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 19:25:17.856238       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 19:25:17.915574       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0719 19:25:17.915612       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:25:17.921846       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:25:17.921894       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:25:17.925757       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:25:17.929674       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 19:25:18.023123       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172698    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e06ef5725540b092711d4479d77670-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-821162\" (UID: \"c2e06ef5725540b092711d4479d77670\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172719    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85506705e86bfcc0f2d8988dad06a7d1-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-821162\" (UID: \"85506705e86bfcc0f2d8988dad06a7d1\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172742    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85506705e86bfcc0f2d8988dad06a7d1-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-821162\" (UID: \"85506705e86bfcc0f2d8988dad06a7d1\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172771    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85506705e86bfcc0f2d8988dad06a7d1-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-821162\" (UID: \"85506705e86bfcc0f2d8988dad06a7d1\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172834    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f885c8d8522f6aea32f981e154240da8-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-821162\" (UID: \"f885c8d8522f6aea32f981e154240da8\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.172867    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/aa7c3533c83d5f95cc2660717523ae46-etcd-data\") pod \"etcd-kubernetes-upgrade-821162\" (UID: \"aa7c3533c83d5f95cc2660717523ae46\") " pod="kube-system/etcd-kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.278229    3444 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: E0719 19:25:14.279229    3444 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.58:8443: connect: connection refused" node="kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.417048    3444 scope.go:117] "RemoveContainer" containerID="d6cf9f513c5aa0e6536eed0e9ddbd3baf433290e1cadef3b8972759b920e67a5"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.420607    3444 scope.go:117] "RemoveContainer" containerID="c995a95003e5bd6fb33c749b03b501a8274f45a1c5847fc2f9481377f8c5898c"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.428618    3444 scope.go:117] "RemoveContainer" containerID="802d81d6a0406907b1a09ee4ee2a6aa330d35f6fc9949e0116ce0ef43354c55c"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.428742    3444 scope.go:117] "RemoveContainer" containerID="782b4ec714830c622aa8fc1de76b5a449990a0131e0af90161f96b3e9e6b5cb5"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: E0719 19:25:14.572861    3444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-821162?timeout=10s\": dial tcp 192.168.61.58:8443: connect: connection refused" interval="800ms"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:14.684642    3444 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-821162"
	Jul 19 19:25:14 kubernetes-upgrade-821162 kubelet[3444]: E0719 19:25:14.685159    3444 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.58:8443: connect: connection refused" node="kubernetes-upgrade-821162"
	Jul 19 19:25:15 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:15.486802    3444 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-821162"
	Jul 19 19:25:17 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:17.950299    3444 apiserver.go:52] "Watching apiserver"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.051191    3444 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-821162"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.051289    3444 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-821162"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.051313    3444 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.052307    3444 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.066759    3444 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.073555    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6977e800-7ea0-4306-9390-674f30c19d7a-xtables-lock\") pod \"kube-proxy-jmgz9\" (UID: \"6977e800-7ea0-4306-9390-674f30c19d7a\") " pod="kube-system/kube-proxy-jmgz9"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.073805    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6977e800-7ea0-4306-9390-674f30c19d7a-lib-modules\") pod \"kube-proxy-jmgz9\" (UID: \"6977e800-7ea0-4306-9390-674f30c19d7a\") " pod="kube-system/kube-proxy-jmgz9"
	Jul 19 19:25:18 kubernetes-upgrade-821162 kubelet[3444]: I0719 19:25:18.074130    3444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/02810e57-5b03-4b0f-83a4-56623e9a0088-tmp\") pod \"storage-provisioner\" (UID: \"02810e57-5b03-4b0f-83a4-56623e9a0088\") " pod="kube-system/storage-provisioner"
	
	
	==> storage-provisioner [2cdac3472a1b5cba1771f85cf9d3e03fc0c9db1c3832e7287b8f693919861f7e] <==
	I0719 19:24:32.499177       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [6f4e7d712e6d6af8d321d596f5af869f2f4b365b1009529cdd5014fb0115da09] <==
	I0719 19:25:02.500161       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:25:04.593003       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:25:04.593875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0719 19:25:04.594670       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0719 19:25:08.047580       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0719 19:25:12.305610       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0719 19:25:18.006382       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:25:18.006637       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-821162_0830addb-3eec-43ca-897d-fe057da58103!
	I0719 19:25:18.007415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06b4f802-3619-4e2a-b20e-7b3331411f67", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-821162_0830addb-3eec-43ca-897d-fe057da58103 became leader
	I0719 19:25:18.108560       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-821162_0830addb-3eec-43ca-897d-fe057da58103!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-821162 -n kubernetes-upgrade-821162
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-821162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-821162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-821162
--- FAIL: TestKubernetesUpgrade (418.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (271.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m31.061867056s)

                                                
                                                
-- stdout --
	* [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:24:46.778481   65113 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:24:46.778623   65113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:24:46.778634   65113 out.go:304] Setting ErrFile to fd 2...
	I0719 19:24:46.778640   65113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:24:46.778931   65113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:24:46.779784   65113 out.go:298] Setting JSON to false
	I0719 19:24:46.781114   65113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7630,"bootTime":1721409457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:24:46.781235   65113 start.go:139] virtualization: kvm guest
	I0719 19:24:46.976105   65113 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:24:47.194799   65113 notify.go:220] Checking for updates...
	I0719 19:24:47.335830   65113 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:24:47.514616   65113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:24:47.665795   65113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:24:47.692280   65113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:24:47.787113   65113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:24:47.876188   65113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:24:47.969577   65113 config.go:182] Loaded profile config "kubernetes-upgrade-821162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:24:47.969746   65113 config.go:182] Loaded profile config "pause-666146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:24:47.969855   65113 config.go:182] Loaded profile config "running-upgrade-197194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0719 19:24:47.969983   65113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:24:48.033586   65113 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 19:24:48.063953   65113 start.go:297] selected driver: kvm2
	I0719 19:24:48.063990   65113 start.go:901] validating driver "kvm2" against <nil>
	I0719 19:24:48.064014   65113 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:24:48.065118   65113 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:24:48.065223   65113 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:24:48.081619   65113 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:24:48.081667   65113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 19:24:48.081883   65113 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:24:48.081932   65113 cni.go:84] Creating CNI manager for ""
	I0719 19:24:48.081944   65113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:24:48.081952   65113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 19:24:48.082004   65113 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:24:48.082093   65113 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:24:48.199076   65113 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:24:48.281915   65113 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:24:48.281993   65113 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:24:48.282002   65113 cache.go:56] Caching tarball of preloaded images
	I0719 19:24:48.282074   65113 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:24:48.282084   65113 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:24:48.282190   65113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:24:48.282206   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json: {Name:mk54da9450b8b164634b2b5375a5b0e38c46342b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:24:48.282420   65113 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:24:52.052694   65113 start.go:364] duration metric: took 3.770223968s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:24:52.052754   65113 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:24:52.052919   65113 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 19:24:52.054910   65113 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 19:24:52.055098   65113 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:24:52.055145   65113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:24:52.072227   65113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0719 19:24:52.072645   65113 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:24:52.073354   65113 main.go:141] libmachine: Using API Version  1
	I0719 19:24:52.073377   65113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:24:52.073778   65113 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:24:52.073984   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:24:52.074165   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:24:52.074342   65113 start.go:159] libmachine.API.Create for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:24:52.074370   65113 client.go:168] LocalClient.Create starting
	I0719 19:24:52.074402   65113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 19:24:52.074441   65113 main.go:141] libmachine: Decoding PEM data...
	I0719 19:24:52.074464   65113 main.go:141] libmachine: Parsing certificate...
	I0719 19:24:52.074546   65113 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 19:24:52.074578   65113 main.go:141] libmachine: Decoding PEM data...
	I0719 19:24:52.074595   65113 main.go:141] libmachine: Parsing certificate...
	I0719 19:24:52.074622   65113 main.go:141] libmachine: Running pre-create checks...
	I0719 19:24:52.074640   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .PreCreateCheck
	I0719 19:24:52.075033   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:24:52.075522   65113 main.go:141] libmachine: Creating machine...
	I0719 19:24:52.075549   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .Create
	I0719 19:24:52.075711   65113 main.go:141] libmachine: (old-k8s-version-570369) Creating KVM machine...
	I0719 19:24:52.076974   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found existing default KVM network
	I0719 19:24:52.078749   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:52.078586   65152 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001a7370}
	I0719 19:24:52.078778   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | created network xml: 
	I0719 19:24:52.078788   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | <network>
	I0719 19:24:52.078798   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   <name>mk-old-k8s-version-570369</name>
	I0719 19:24:52.078806   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   <dns enable='no'/>
	I0719 19:24:52.078814   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   
	I0719 19:24:52.078823   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 19:24:52.078832   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |     <dhcp>
	I0719 19:24:52.078841   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 19:24:52.078850   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |     </dhcp>
	I0719 19:24:52.078862   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   </ip>
	I0719 19:24:52.078870   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG |   
	I0719 19:24:52.078880   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | </network>
	I0719 19:24:52.078889   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | 
	I0719 19:24:52.084238   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | trying to create private KVM network mk-old-k8s-version-570369 192.168.39.0/24...
	I0719 19:24:52.161823   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | private KVM network mk-old-k8s-version-570369 192.168.39.0/24 created
	I0719 19:24:52.161864   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:52.161777   65152 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:24:52.161877   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369 ...
	I0719 19:24:52.161902   65113 main.go:141] libmachine: (old-k8s-version-570369) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 19:24:52.161917   65113 main.go:141] libmachine: (old-k8s-version-570369) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 19:24:52.405997   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:52.405890   65152 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa...
	I0719 19:24:52.597198   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:52.597048   65152 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/old-k8s-version-570369.rawdisk...
	I0719 19:24:52.597233   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Writing magic tar header
	I0719 19:24:52.597251   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Writing SSH key tar header
	I0719 19:24:52.597266   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:52.597170   65152 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369 ...
	I0719 19:24:52.597282   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369
	I0719 19:24:52.597319   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 19:24:52.597352   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:24:52.597367   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369 (perms=drwx------)
	I0719 19:24:52.597411   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 19:24:52.597443   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 19:24:52.597459   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 19:24:52.597472   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 19:24:52.597482   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home/jenkins
	I0719 19:24:52.597501   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Checking permissions on dir: /home
	I0719 19:24:52.597524   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Skipping /home - not owner
	I0719 19:24:52.597543   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 19:24:52.597575   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 19:24:52.597588   65113 main.go:141] libmachine: (old-k8s-version-570369) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 19:24:52.597595   65113 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:24:52.598836   65113 main.go:141] libmachine: (old-k8s-version-570369) define libvirt domain using xml: 
	I0719 19:24:52.598857   65113 main.go:141] libmachine: (old-k8s-version-570369) <domain type='kvm'>
	I0719 19:24:52.598869   65113 main.go:141] libmachine: (old-k8s-version-570369)   <name>old-k8s-version-570369</name>
	I0719 19:24:52.598879   65113 main.go:141] libmachine: (old-k8s-version-570369)   <memory unit='MiB'>2200</memory>
	I0719 19:24:52.598890   65113 main.go:141] libmachine: (old-k8s-version-570369)   <vcpu>2</vcpu>
	I0719 19:24:52.598897   65113 main.go:141] libmachine: (old-k8s-version-570369)   <features>
	I0719 19:24:52.598905   65113 main.go:141] libmachine: (old-k8s-version-570369)     <acpi/>
	I0719 19:24:52.598913   65113 main.go:141] libmachine: (old-k8s-version-570369)     <apic/>
	I0719 19:24:52.598925   65113 main.go:141] libmachine: (old-k8s-version-570369)     <pae/>
	I0719 19:24:52.598944   65113 main.go:141] libmachine: (old-k8s-version-570369)     
	I0719 19:24:52.598956   65113 main.go:141] libmachine: (old-k8s-version-570369)   </features>
	I0719 19:24:52.598968   65113 main.go:141] libmachine: (old-k8s-version-570369)   <cpu mode='host-passthrough'>
	I0719 19:24:52.598979   65113 main.go:141] libmachine: (old-k8s-version-570369)   
	I0719 19:24:52.598987   65113 main.go:141] libmachine: (old-k8s-version-570369)   </cpu>
	I0719 19:24:52.599009   65113 main.go:141] libmachine: (old-k8s-version-570369)   <os>
	I0719 19:24:52.599032   65113 main.go:141] libmachine: (old-k8s-version-570369)     <type>hvm</type>
	I0719 19:24:52.599045   65113 main.go:141] libmachine: (old-k8s-version-570369)     <boot dev='cdrom'/>
	I0719 19:24:52.599076   65113 main.go:141] libmachine: (old-k8s-version-570369)     <boot dev='hd'/>
	I0719 19:24:52.599090   65113 main.go:141] libmachine: (old-k8s-version-570369)     <bootmenu enable='no'/>
	I0719 19:24:52.599101   65113 main.go:141] libmachine: (old-k8s-version-570369)   </os>
	I0719 19:24:52.599120   65113 main.go:141] libmachine: (old-k8s-version-570369)   <devices>
	I0719 19:24:52.599134   65113 main.go:141] libmachine: (old-k8s-version-570369)     <disk type='file' device='cdrom'>
	I0719 19:24:52.599149   65113 main.go:141] libmachine: (old-k8s-version-570369)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/boot2docker.iso'/>
	I0719 19:24:52.599161   65113 main.go:141] libmachine: (old-k8s-version-570369)       <target dev='hdc' bus='scsi'/>
	I0719 19:24:52.599180   65113 main.go:141] libmachine: (old-k8s-version-570369)       <readonly/>
	I0719 19:24:52.599200   65113 main.go:141] libmachine: (old-k8s-version-570369)     </disk>
	I0719 19:24:52.599214   65113 main.go:141] libmachine: (old-k8s-version-570369)     <disk type='file' device='disk'>
	I0719 19:24:52.599227   65113 main.go:141] libmachine: (old-k8s-version-570369)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 19:24:52.599243   65113 main.go:141] libmachine: (old-k8s-version-570369)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/old-k8s-version-570369.rawdisk'/>
	I0719 19:24:52.599254   65113 main.go:141] libmachine: (old-k8s-version-570369)       <target dev='hda' bus='virtio'/>
	I0719 19:24:52.599266   65113 main.go:141] libmachine: (old-k8s-version-570369)     </disk>
	I0719 19:24:52.599280   65113 main.go:141] libmachine: (old-k8s-version-570369)     <interface type='network'>
	I0719 19:24:52.599293   65113 main.go:141] libmachine: (old-k8s-version-570369)       <source network='mk-old-k8s-version-570369'/>
	I0719 19:24:52.599303   65113 main.go:141] libmachine: (old-k8s-version-570369)       <model type='virtio'/>
	I0719 19:24:52.599314   65113 main.go:141] libmachine: (old-k8s-version-570369)     </interface>
	I0719 19:24:52.599323   65113 main.go:141] libmachine: (old-k8s-version-570369)     <interface type='network'>
	I0719 19:24:52.599336   65113 main.go:141] libmachine: (old-k8s-version-570369)       <source network='default'/>
	I0719 19:24:52.599343   65113 main.go:141] libmachine: (old-k8s-version-570369)       <model type='virtio'/>
	I0719 19:24:52.599354   65113 main.go:141] libmachine: (old-k8s-version-570369)     </interface>
	I0719 19:24:52.599364   65113 main.go:141] libmachine: (old-k8s-version-570369)     <serial type='pty'>
	I0719 19:24:52.599374   65113 main.go:141] libmachine: (old-k8s-version-570369)       <target port='0'/>
	I0719 19:24:52.599384   65113 main.go:141] libmachine: (old-k8s-version-570369)     </serial>
	I0719 19:24:52.599393   65113 main.go:141] libmachine: (old-k8s-version-570369)     <console type='pty'>
	I0719 19:24:52.599404   65113 main.go:141] libmachine: (old-k8s-version-570369)       <target type='serial' port='0'/>
	I0719 19:24:52.599414   65113 main.go:141] libmachine: (old-k8s-version-570369)     </console>
	I0719 19:24:52.599425   65113 main.go:141] libmachine: (old-k8s-version-570369)     <rng model='virtio'>
	I0719 19:24:52.599439   65113 main.go:141] libmachine: (old-k8s-version-570369)       <backend model='random'>/dev/random</backend>
	I0719 19:24:52.599446   65113 main.go:141] libmachine: (old-k8s-version-570369)     </rng>
	I0719 19:24:52.599457   65113 main.go:141] libmachine: (old-k8s-version-570369)     
	I0719 19:24:52.599464   65113 main.go:141] libmachine: (old-k8s-version-570369)     
	I0719 19:24:52.599474   65113 main.go:141] libmachine: (old-k8s-version-570369)   </devices>
	I0719 19:24:52.599484   65113 main.go:141] libmachine: (old-k8s-version-570369) </domain>
	I0719 19:24:52.599495   65113 main.go:141] libmachine: (old-k8s-version-570369) 
	I0719 19:24:52.603976   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:a7:ee:27 in network default
	I0719 19:24:52.604844   65113 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:24:52.604870   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:52.605637   65113 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:24:52.606022   65113 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:24:52.606647   65113 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:24:52.607521   65113 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:24:53.965226   65113 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:24:53.965979   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:53.966462   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:53.966485   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:53.966430   65152 retry.go:31] will retry after 210.098259ms: waiting for machine to come up
	I0719 19:24:54.177771   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:54.178265   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:54.178289   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:54.178209   65152 retry.go:31] will retry after 346.884591ms: waiting for machine to come up
	I0719 19:24:54.526691   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:54.527208   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:54.527238   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:54.527158   65152 retry.go:31] will retry after 422.024167ms: waiting for machine to come up
	I0719 19:24:54.950926   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:54.951575   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:54.951604   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:54.951523   65152 retry.go:31] will retry after 559.854237ms: waiting for machine to come up
	I0719 19:24:56.020538   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:56.021030   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:56.021069   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:56.021014   65152 retry.go:31] will retry after 670.834049ms: waiting for machine to come up
	I0719 19:24:56.693977   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:56.694407   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:56.694461   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:56.694378   65152 retry.go:31] will retry after 578.315016ms: waiting for machine to come up
	I0719 19:24:57.274679   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:57.275110   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:57.275149   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:57.275066   65152 retry.go:31] will retry after 995.303031ms: waiting for machine to come up
	I0719 19:24:58.271721   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:58.272266   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:58.272291   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:58.272223   65152 retry.go:31] will retry after 1.158986535s: waiting for machine to come up
	I0719 19:24:59.432362   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:24:59.432871   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:24:59.432895   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:24:59.432810   65152 retry.go:31] will retry after 1.588027525s: waiting for machine to come up
	I0719 19:25:01.022797   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:01.023405   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:25:01.023434   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:25:01.023351   65152 retry.go:31] will retry after 2.211395073s: waiting for machine to come up
	I0719 19:25:03.236197   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:03.236803   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:25:03.236838   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:25:03.236760   65152 retry.go:31] will retry after 2.443525966s: waiting for machine to come up
	I0719 19:25:05.682770   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:05.683305   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:25:05.683332   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:25:05.683237   65152 retry.go:31] will retry after 2.660026059s: waiting for machine to come up
	I0719 19:25:08.344803   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:08.345225   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:25:08.345244   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:25:08.345196   65152 retry.go:31] will retry after 3.224130913s: waiting for machine to come up
	I0719 19:25:11.572864   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.573420   65113 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:25:11.573445   65113 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:25:11.573458   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.573823   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369
	I0719 19:25:11.651654   65113 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:25:11.651684   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:25:11.651693   65113 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:25:11.654924   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.655468   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:11.655496   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.655738   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:25:11.655758   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:25:11.655799   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:25:11.655816   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:25:11.655854   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:25:11.787478   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:25:11.787738   65113 main.go:141] libmachine: (old-k8s-version-570369) KVM machine creation complete!
	I0719 19:25:11.788066   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:25:11.788613   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:11.788791   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:11.788926   65113 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 19:25:11.788976   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:25:11.790278   65113 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 19:25:11.790295   65113 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 19:25:11.790318   65113 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 19:25:11.790337   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:11.792743   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.793146   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:11.793174   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.793305   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:11.793489   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.793653   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.793805   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:11.793956   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:11.794169   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:11.794182   65113 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 19:25:11.898823   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:25:11.898850   65113 main.go:141] libmachine: Detecting the provisioner...
	I0719 19:25:11.898862   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:11.901673   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.901959   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:11.901981   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:11.902159   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:11.902328   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.902544   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:11.902689   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:11.902880   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:11.903047   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:11.903062   65113 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 19:25:12.012012   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 19:25:12.012124   65113 main.go:141] libmachine: found compatible host: buildroot
	I0719 19:25:12.012140   65113 main.go:141] libmachine: Provisioning with buildroot...
	I0719 19:25:12.012151   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.012393   65113 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:25:12.012418   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.012624   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.014909   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.015192   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.015233   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.015387   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.015576   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.015730   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.015894   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.016054   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.016220   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.016234   65113 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:25:12.132568   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:25:12.132598   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.135645   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.136106   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.136129   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.136269   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.136490   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.136695   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.136840   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.136996   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.137198   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.137225   65113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:25:12.251957   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:25:12.251983   65113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:25:12.252024   65113 buildroot.go:174] setting up certificates
	I0719 19:25:12.252033   65113 provision.go:84] configureAuth start
	I0719 19:25:12.252042   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:25:12.252310   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:12.254951   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.255374   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.255398   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.255649   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.258035   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.258326   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.258348   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.258499   65113 provision.go:143] copyHostCerts
	I0719 19:25:12.258578   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:25:12.258591   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:25:12.258646   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:25:12.258748   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:25:12.258756   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:25:12.258776   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:25:12.258868   65113 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:25:12.258876   65113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:25:12.258895   65113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:25:12.258949   65113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:25:12.400305   65113 provision.go:177] copyRemoteCerts
	I0719 19:25:12.400358   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:25:12.400383   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.403043   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.403426   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.403465   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.403618   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.403855   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.404016   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.404212   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:12.494424   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:25:12.518045   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:25:12.539532   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:25:12.561104   65113 provision.go:87] duration metric: took 309.061303ms to configureAuth
	I0719 19:25:12.561132   65113 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:25:12.561318   65113 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:25:12.561386   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.563974   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.564298   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.564318   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.564507   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.564707   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.564869   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.565004   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.565155   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.565315   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.565337   65113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:25:12.845369   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:25:12.845396   65113 main.go:141] libmachine: Checking connection to Docker...
	I0719 19:25:12.845406   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetURL
	I0719 19:25:12.846599   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using libvirt version 6000000
	I0719 19:25:12.848848   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.849174   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.849204   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.849424   65113 main.go:141] libmachine: Docker is up and running!
	I0719 19:25:12.849439   65113 main.go:141] libmachine: Reticulating splines...
	I0719 19:25:12.849447   65113 client.go:171] duration metric: took 20.775069344s to LocalClient.Create
	I0719 19:25:12.849473   65113 start.go:167] duration metric: took 20.775132716s to libmachine.API.Create "old-k8s-version-570369"
	I0719 19:25:12.849485   65113 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:25:12.849514   65113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:25:12.849537   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:12.849791   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:25:12.849816   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.852334   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.852655   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.852682   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.852867   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.853055   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.853189   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.853336   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:12.934181   65113 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:25:12.938481   65113 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:25:12.938508   65113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:25:12.938572   65113 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:25:12.938698   65113 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:25:12.938817   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:25:12.947871   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:25:12.970064   65113 start.go:296] duration metric: took 120.546471ms for postStartSetup
	I0719 19:25:12.970121   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:25:12.970687   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:12.973371   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.973938   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.973972   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.974312   65113 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:25:12.974498   65113 start.go:128] duration metric: took 20.921561061s to createHost
	I0719 19:25:12.974522   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:12.976940   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.977308   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:12.977344   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:12.977522   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:12.977716   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.977888   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:12.978055   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:12.978235   65113 main.go:141] libmachine: Using SSH client type: native
	I0719 19:25:12.978452   65113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:25:12.978465   65113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:25:13.084063   65113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417113.046702003
	
	I0719 19:25:13.084091   65113 fix.go:216] guest clock: 1721417113.046702003
	I0719 19:25:13.084101   65113 fix.go:229] Guest: 2024-07-19 19:25:13.046702003 +0000 UTC Remote: 2024-07-19 19:25:12.974509819 +0000 UTC m=+26.255657653 (delta=72.192184ms)
	I0719 19:25:13.084119   65113 fix.go:200] guest clock delta is within tolerance: 72.192184ms
	I0719 19:25:13.084124   65113 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 21.031406854s
	I0719 19:25:13.084148   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.084465   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:13.087226   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.087634   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.087666   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.087824   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088389   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088565   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:25:13.088659   65113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:25:13.088703   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:13.088760   65113 ssh_runner.go:195] Run: cat /version.json
	I0719 19:25:13.088783   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:25:13.091368   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091732   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.091821   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091875   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.091904   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:13.092069   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:13.092221   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:13.092242   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:13.092279   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:13.092342   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:13.092418   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:25:13.092555   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:25:13.092703   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:25:13.092877   65113 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:25:13.222667   65113 ssh_runner.go:195] Run: systemctl --version
	I0719 19:25:13.229930   65113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:25:13.389152   65113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:25:13.397366   65113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:25:13.397434   65113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:25:13.417521   65113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:25:13.417550   65113 start.go:495] detecting cgroup driver to use...
	I0719 19:25:13.417617   65113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:25:13.435143   65113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:25:13.452657   65113 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:25:13.452726   65113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:25:13.470809   65113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:25:13.486767   65113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:25:13.624734   65113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:25:13.781004   65113 docker.go:233] disabling docker service ...
	I0719 19:25:13.781083   65113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:25:13.796845   65113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:25:13.812336   65113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:25:13.962842   65113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:25:14.090594   65113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:25:14.104569   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:25:14.121918   65113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:25:14.121982   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.131932   65113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:25:14.131994   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.144530   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.154355   65113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:25:14.163859   65113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:25:14.173710   65113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:25:14.182589   65113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:25:14.182632   65113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:25:14.194812   65113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:25:14.203921   65113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:25:14.318322   65113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:25:14.495411   65113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:25:14.495488   65113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:25:14.501604   65113 start.go:563] Will wait 60s for crictl version
	I0719 19:25:14.501662   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:14.505975   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:25:14.546509   65113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:25:14.546618   65113 ssh_runner.go:195] Run: crio --version
	I0719 19:25:14.588192   65113 ssh_runner.go:195] Run: crio --version
	I0719 19:25:14.623792   65113 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:25:14.624823   65113 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:25:14.628090   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:14.628501   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:25:06 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:25:14.628524   65113 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:25:14.628748   65113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:25:14.633863   65113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:25:14.649388   65113 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:25:14.649528   65113 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:25:14.649598   65113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:25:14.682758   65113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:25:14.682831   65113 ssh_runner.go:195] Run: which lz4
	I0719 19:25:14.687813   65113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:25:14.693121   65113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:25:14.693158   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:25:16.198992   65113 crio.go:462] duration metric: took 1.511213625s to copy over tarball
	I0719 19:25:16.199077   65113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:25:19.116871   65113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.917754749s)
	I0719 19:25:19.116909   65113 crio.go:469] duration metric: took 2.917883315s to extract the tarball
	I0719 19:25:19.116918   65113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:25:19.158429   65113 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:25:19.201773   65113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:25:19.201798   65113 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:25:19.201872   65113 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:19.201893   65113 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.201892   65113 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.201943   65113 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.201876   65113 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.202142   65113 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:25:19.202114   65113 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.201876   65113 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.203202   65113 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:19.203246   65113 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.203918   65113 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.203927   65113 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.203936   65113 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.203938   65113 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:25:19.204002   65113 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.204003   65113 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.431350   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:25:19.468154   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.472136   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.472832   65113 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:25:19.472886   65113 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:25:19.472936   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.474064   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.488243   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.488498   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.495067   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.630243   65113 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:25:19.630281   65113 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.630333   65113 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:25:19.630373   65113 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.630421   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.630438   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:25:19.630345   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641299   65113 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:25:19.641337   65113 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.641377   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641429   65113 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:25:19.641467   65113 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.641479   65113 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:25:19.641502   65113 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:25:19.641507   65113 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.641511   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641525   65113 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.641544   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.641554   65113 ssh_runner.go:195] Run: which crictl
	I0719 19:25:19.672961   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:25:19.673019   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:25:19.673044   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:25:19.673084   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:25:19.673142   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:25:19.673149   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:25:19.673202   65113 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:25:19.797213   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:25:19.797266   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:25:19.797314   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:25:19.797373   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:25:19.797398   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:25:19.797448   65113 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:25:20.486805   65113 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:25:20.623457   65113 cache_images.go:92] duration metric: took 1.421643712s to LoadCachedImages
	W0719 19:25:20.623561   65113 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 19:25:20.623581   65113 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:25:20.623709   65113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:25:20.623772   65113 ssh_runner.go:195] Run: crio config
	I0719 19:25:20.675456   65113 cni.go:84] Creating CNI manager for ""
	I0719 19:25:20.675478   65113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:25:20.675486   65113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:25:20.675508   65113 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:25:20.675643   65113 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:25:20.675701   65113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:25:20.686000   65113 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:25:20.686071   65113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:25:20.696066   65113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:25:20.712140   65113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:25:20.727498   65113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:25:20.743954   65113 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:25:20.747396   65113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:25:20.758968   65113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:25:20.889758   65113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:25:20.905948   65113 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:25:20.905964   65113 certs.go:194] generating shared ca certs ...
	I0719 19:25:20.905978   65113 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:20.906119   65113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:25:20.906155   65113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:25:20.906164   65113 certs.go:256] generating profile certs ...
	I0719 19:25:20.906214   65113 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:25:20.906232   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt with IP's: []
	I0719 19:25:21.039126   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt ...
	I0719 19:25:21.039158   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: {Name:mk0c0717927b787bb9a6c29182f44712ce9fe3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.039374   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key ...
	I0719 19:25:21.039398   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key: {Name:mkcff3a4be82bcd0ab02bb354461057bce059fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.039529   65113 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:25:21.039554   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I0719 19:25:21.186701   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d ...
	I0719 19:25:21.186739   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d: {Name:mkadcc6bb9c4c7be3c012cf09af260a515801ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.186932   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d ...
	I0719 19:25:21.186962   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d: {Name:mkbfaee0b7d4f20fc1b2d4ab30d4281e256b705e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.187060   65113 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt.41bb110d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt
	I0719 19:25:21.187158   65113 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key
	I0719 19:25:21.187239   65113 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:25:21.187263   65113 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt with IP's: []
	I0719 19:25:21.350294   65113 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt ...
	I0719 19:25:21.350324   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt: {Name:mkff46854d4c04b2afbe102ad3d53582d31fac2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.350481   65113 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key ...
	I0719 19:25:21.350493   65113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key: {Name:mk45f7518ac8f2682dcf8a6b5556cceea6e85d84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:25:21.350655   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:25:21.350692   65113 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:25:21.350701   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:25:21.350726   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:25:21.350767   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:25:21.350797   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:25:21.350834   65113 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:25:21.351424   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:25:21.376565   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:25:21.401761   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:25:21.426108   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:25:21.449274   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:25:21.472921   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:25:21.496418   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:25:21.547974   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:25:21.572554   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:25:21.596778   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:25:21.619518   65113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:25:21.642300   65113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:25:21.658505   65113 ssh_runner.go:195] Run: openssl version
	I0719 19:25:21.664365   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:25:21.674743   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.678951   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.679007   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:25:21.684625   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:25:21.694878   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:25:21.705292   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.709676   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.709735   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:25:21.715061   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:25:21.725472   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:25:21.735799   65113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.739954   65113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.740013   65113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:25:21.745450   65113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
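The 51391683.0, 3ec20f2e.0 and b5213941.0 links created above follow OpenSSL's CApath convention: each certificate is exposed under its subject hash so the system trust store can look it up by name. Reproducing the pattern by hand looks roughly like this (a sketch, reusing the minikubeCA path from this run):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"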
	I0719 19:25:21.756026   65113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:25:21.759745   65113 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 19:25:21.759819   65113 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:25:21.759920   65113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:25:21.759980   65113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:25:21.798868   65113 cri.go:89] found id: ""
	I0719 19:25:21.798963   65113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:25:21.808723   65113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:25:21.820804   65113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:25:21.830669   65113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:25:21.830691   65113 kubeadm.go:157] found existing configuration files:
	
	I0719 19:25:21.830743   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:25:21.839685   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:25:21.839762   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:25:21.849166   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:25:21.858491   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:25:21.858541   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:25:21.871326   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:25:21.881930   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:25:21.881990   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:25:21.897810   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:25:21.907889   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:25:21.907940   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:25:21.919872   65113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:25:22.042825   65113 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:25:22.042907   65113 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:25:22.200052   65113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:25:22.200195   65113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:25:22.200314   65113 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:25:22.419955   65113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:25:22.422797   65113 out.go:204]   - Generating certificates and keys ...
	I0719 19:25:22.422900   65113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:25:22.422992   65113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:25:22.641790   65113 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 19:25:22.778960   65113 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 19:25:22.945382   65113 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 19:25:23.101536   65113 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 19:25:23.269669   65113 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 19:25:23.270081   65113 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0719 19:25:23.399167   65113 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 19:25:23.399381   65113 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0719 19:25:23.604808   65113 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 19:25:24.012178   65113 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 19:25:24.426962   65113 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 19:25:24.427119   65113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:25:24.547262   65113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:25:24.644618   65113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:25:24.772027   65113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:25:24.984612   65113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:25:25.000921   65113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:25:25.002746   65113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:25:25.002817   65113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:25:25.144478   65113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:25:25.233336   65113 out.go:204]   - Booting up control plane ...
	I0719 19:25:25.233471   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:25:25.233585   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:25:25.233711   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:25:25.233845   65113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:25:25.234043   65113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:26:05.136512   65113 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:26:05.137223   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:26:05.137501   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:26:10.136419   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:26:10.136605   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:26:20.135577   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:26:20.135784   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:26:40.136336   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:26:40.136592   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:27:20.134981   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:27:20.135257   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:27:20.135282   65113 kubeadm.go:310] 
	I0719 19:27:20.135348   65113 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:27:20.135422   65113 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:27:20.135433   65113 kubeadm.go:310] 
	I0719 19:27:20.135479   65113 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:27:20.135522   65113 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:27:20.135655   65113 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:27:20.135667   65113 kubeadm.go:310] 
	I0719 19:27:20.135796   65113 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:27:20.135861   65113 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:27:20.135904   65113 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:27:20.135910   65113 kubeadm.go:310] 
	I0719 19:27:20.135994   65113 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:27:20.136058   65113 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:27:20.136064   65113 kubeadm.go:310] 
	I0719 19:27:20.136139   65113 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:27:20.136206   65113 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:27:20.136285   65113 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:27:20.136368   65113 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:27:20.136380   65113 kubeadm.go:310] 
	I0719 19:27:20.136964   65113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:27:20.137065   65113 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:27:20.137146   65113 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
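Strung together, the checks kubeadm suggests above amount to a short sequence on the node (commands as printed by kubeadm, not re-run here; CONTAINERID is a placeholder taken from the ps output):

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID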
	W0719 19:27:20.137298   65113 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-570369] and IPs [192.168.39.220 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:27:20.137358   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:27:20.617718   65113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:27:20.633031   65113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:27:20.642444   65113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:27:20.642464   65113 kubeadm.go:157] found existing configuration files:
	
	I0719 19:27:20.642512   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:27:20.651559   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:27:20.651618   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:27:20.661343   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:27:20.671275   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:27:20.671332   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:27:20.684528   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:27:20.694337   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:27:20.694405   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:27:20.706165   65113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:27:20.716845   65113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:27:20.716903   65113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:27:20.726664   65113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:27:20.799892   65113 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:27:20.799959   65113 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:27:20.951514   65113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:27:20.951659   65113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:27:20.951777   65113 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:27:21.140034   65113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:27:21.143045   65113 out.go:204]   - Generating certificates and keys ...
	I0719 19:27:21.143160   65113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:27:21.143242   65113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:27:21.143353   65113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:27:21.143444   65113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:27:21.143557   65113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:27:21.143652   65113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:27:21.143735   65113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:27:21.143810   65113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:27:21.143923   65113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:27:21.144047   65113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:27:21.144113   65113 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:27:21.144195   65113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:27:21.275403   65113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:27:21.693185   65113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:27:21.818726   65113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:27:22.046389   65113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:27:22.067369   65113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:27:22.068424   65113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:27:22.068510   65113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:27:22.201200   65113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:27:22.203215   65113 out.go:204]   - Booting up control plane ...
	I0719 19:27:22.203358   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:27:22.214399   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:27:22.215300   65113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:27:22.215960   65113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:27:22.218109   65113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:28:02.220303   65113 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:28:02.220391   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:28:02.220577   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:28:07.221150   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:28:07.221414   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:28:17.221921   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:28:17.222124   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:28:37.223621   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:28:37.223809   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:29:17.222914   65113 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:29:17.223158   65113 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:29:17.223178   65113 kubeadm.go:310] 
	I0719 19:29:17.223237   65113 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:29:17.223294   65113 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:29:17.223304   65113 kubeadm.go:310] 
	I0719 19:29:17.223356   65113 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:29:17.223409   65113 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:29:17.223562   65113 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:29:17.223586   65113 kubeadm.go:310] 
	I0719 19:29:17.223732   65113 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:29:17.223801   65113 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:29:17.223873   65113 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:29:17.223883   65113 kubeadm.go:310] 
	I0719 19:29:17.223998   65113 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:29:17.224118   65113 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:29:17.224129   65113 kubeadm.go:310] 
	I0719 19:29:17.224253   65113 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:29:17.224406   65113 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:29:17.224519   65113 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:29:17.224619   65113 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:29:17.224632   65113 kubeadm.go:310] 
	I0719 19:29:17.225254   65113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:29:17.225374   65113 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:29:17.225478   65113 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:29:17.225562   65113 kubeadm.go:394] duration metric: took 3m55.465747381s to StartCluster
	I0719 19:29:17.225615   65113 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:29:17.225671   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:29:17.263830   65113 cri.go:89] found id: ""
	I0719 19:29:17.263877   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.263887   65113 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:29:17.263894   65113 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:29:17.263955   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:29:17.294800   65113 cri.go:89] found id: ""
	I0719 19:29:17.294833   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.294843   65113 logs.go:278] No container was found matching "etcd"
	I0719 19:29:17.294848   65113 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:29:17.294907   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:29:17.328134   65113 cri.go:89] found id: ""
	I0719 19:29:17.328161   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.328168   65113 logs.go:278] No container was found matching "coredns"
	I0719 19:29:17.328173   65113 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:29:17.328219   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:29:17.364163   65113 cri.go:89] found id: ""
	I0719 19:29:17.364189   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.364197   65113 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:29:17.364203   65113 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:29:17.364247   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:29:17.394203   65113 cri.go:89] found id: ""
	I0719 19:29:17.394230   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.394239   65113 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:29:17.394245   65113 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:29:17.394290   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:29:17.427733   65113 cri.go:89] found id: ""
	I0719 19:29:17.427763   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.427772   65113 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:29:17.427777   65113 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:29:17.427922   65113 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:29:17.459107   65113 cri.go:89] found id: ""
	I0719 19:29:17.459135   65113 logs.go:276] 0 containers: []
	W0719 19:29:17.459146   65113 logs.go:278] No container was found matching "kindnet"
	I0719 19:29:17.459163   65113 logs.go:123] Gathering logs for kubelet ...
	I0719 19:29:17.459178   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:29:17.508505   65113 logs.go:123] Gathering logs for dmesg ...
	I0719 19:29:17.508536   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:29:17.521136   65113 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:29:17.521165   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:29:17.630674   65113 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:29:17.630695   65113 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:29:17.630710   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:29:17.727001   65113 logs.go:123] Gathering logs for container status ...
	I0719 19:29:17.727040   65113 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 19:29:17.762797   65113 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:29:17.762842   65113 out.go:239] * 
	* 
	W0719 19:29:17.762892   65113 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:29:17.762919   65113 out.go:239] * 
	* 
	W0719 19:29:17.764165   65113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:29:17.770367   65113 out.go:177] 
	W0719 19:29:17.771584   65113 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:29:17.771633   65113 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:29:17.771648   65113 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:29:17.773051   65113 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 6 (240.883883ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:18.062337   68029 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (271.36s)
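The kubeadm output and minikube's own suggestion above both point at the kubelet never answering on :10248, most plausibly a cgroup-driver mismatch against the crio runtime. A minimal follow-up sketch, assuming the old-k8s-version-570369 VM is still reachable (e.g. via `minikube ssh -p old-k8s-version-570369`) and that the cgroup driver is in fact the cause; the commands are the ones the messages above already name, not a verified fix:

	# inspect why the kubelet never came up (taken from the kubeadm advice above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry the start with the cgroup driver pinned to systemd, as the minikube suggestion recommends
	# (other flags as in the original failing invocation)
	out/minikube-linux-amd64 start -p old-k8s-version-570369 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# the post-mortem also warns about a stale kubectl context; this repoints it at the profile
	out/minikube-linux-amd64 update-context -p old-k8s-version-570369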

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-971041 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-971041 --alsologtostderr -v=3: exit status 82 (2m0.53362504s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-971041"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:26:46.333766   67080 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:26:46.333890   67080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:26:46.333901   67080 out.go:304] Setting ErrFile to fd 2...
	I0719 19:26:46.333908   67080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:26:46.334089   67080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:26:46.334330   67080 out.go:298] Setting JSON to false
	I0719 19:26:46.334418   67080 mustload.go:65] Loading cluster: no-preload-971041
	I0719 19:26:46.334739   67080 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:26:46.334843   67080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:26:46.335013   67080 mustload.go:65] Loading cluster: no-preload-971041
	I0719 19:26:46.335131   67080 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:26:46.335162   67080 stop.go:39] StopHost: no-preload-971041
	I0719 19:26:46.335770   67080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:26:46.335830   67080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:26:46.352854   67080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
	I0719 19:26:46.353315   67080 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:26:46.353944   67080 main.go:141] libmachine: Using API Version  1
	I0719 19:26:46.353964   67080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:26:46.354316   67080 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:26:46.356905   67080 out.go:177] * Stopping node "no-preload-971041"  ...
	I0719 19:26:46.358396   67080 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 19:26:46.358435   67080 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:26:46.358691   67080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 19:26:46.358715   67080 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:26:46.362054   67080 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:26:46.362510   67080 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:25:27 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:26:46.362536   67080 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:26:46.362645   67080 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:26:46.362825   67080 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:26:46.362976   67080 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:26:46.363126   67080 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:26:46.470099   67080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 19:26:46.531277   67080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 19:26:46.602187   67080 main.go:141] libmachine: Stopping "no-preload-971041"...
	I0719 19:26:46.602219   67080 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:26:46.604015   67080 main.go:141] libmachine: (no-preload-971041) Calling .Stop
	I0719 19:26:46.607953   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 0/120
	I0719 19:26:47.609342   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 1/120
	I0719 19:26:48.610789   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 2/120
	I0719 19:26:49.612782   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 3/120
	I0719 19:26:50.614575   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 4/120
	I0719 19:26:51.616876   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 5/120
	I0719 19:26:52.618301   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 6/120
	I0719 19:26:53.619660   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 7/120
	I0719 19:26:54.620958   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 8/120
	I0719 19:26:55.622397   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 9/120
	I0719 19:26:56.624928   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 10/120
	I0719 19:26:57.626531   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 11/120
	I0719 19:26:58.628918   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 12/120
	I0719 19:26:59.631073   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 13/120
	I0719 19:27:00.632489   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 14/120
	I0719 19:27:01.634780   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 15/120
	I0719 19:27:02.636576   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 16/120
	I0719 19:27:03.638390   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 17/120
	I0719 19:27:04.639739   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 18/120
	I0719 19:27:05.641380   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 19/120
	I0719 19:27:06.643115   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 20/120
	I0719 19:27:07.644992   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 21/120
	I0719 19:27:08.647078   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 22/120
	I0719 19:27:09.649273   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 23/120
	I0719 19:27:10.651168   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 24/120
	I0719 19:27:11.653037   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 25/120
	I0719 19:27:12.654352   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 26/120
	I0719 19:27:13.656781   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 27/120
	I0719 19:27:14.658357   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 28/120
	I0719 19:27:15.660375   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 29/120
	I0719 19:27:16.662350   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 30/120
	I0719 19:27:17.663924   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 31/120
	I0719 19:27:18.665355   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 32/120
	I0719 19:27:19.666926   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 33/120
	I0719 19:27:20.669290   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 34/120
	I0719 19:27:21.671308   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 35/120
	I0719 19:27:22.672777   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 36/120
	I0719 19:27:23.674667   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 37/120
	I0719 19:27:24.676659   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 38/120
	I0719 19:27:25.678918   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 39/120
	I0719 19:27:26.681126   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 40/120
	I0719 19:27:27.682338   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 41/120
	I0719 19:27:28.684537   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 42/120
	I0719 19:27:29.686347   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 43/120
	I0719 19:27:30.688175   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 44/120
	I0719 19:27:31.690251   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 45/120
	I0719 19:27:32.691641   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 46/120
	I0719 19:27:33.693954   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 47/120
	I0719 19:27:34.695335   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 48/120
	I0719 19:27:35.696662   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 49/120
	I0719 19:27:36.698626   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 50/120
	I0719 19:27:37.700105   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 51/120
	I0719 19:27:38.701559   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 52/120
	I0719 19:27:39.703082   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 53/120
	I0719 19:27:40.704694   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 54/120
	I0719 19:27:41.706758   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 55/120
	I0719 19:27:42.708184   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 56/120
	I0719 19:27:43.710338   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 57/120
	I0719 19:27:44.711876   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 58/120
	I0719 19:27:45.713419   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 59/120
	I0719 19:27:46.715776   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 60/120
	I0719 19:27:47.717182   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 61/120
	I0719 19:27:48.718548   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 62/120
	I0719 19:27:49.720268   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 63/120
	I0719 19:27:50.721825   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 64/120
	I0719 19:27:51.723858   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 65/120
	I0719 19:27:52.725433   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 66/120
	I0719 19:27:53.726802   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 67/120
	I0719 19:27:54.728159   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 68/120
	I0719 19:27:55.729509   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 69/120
	I0719 19:27:56.731156   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 70/120
	I0719 19:27:57.732660   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 71/120
	I0719 19:27:58.734346   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 72/120
	I0719 19:27:59.735628   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 73/120
	I0719 19:28:00.737190   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 74/120
	I0719 19:28:01.739406   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 75/120
	I0719 19:28:02.740865   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 76/120
	I0719 19:28:03.742234   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 77/120
	I0719 19:28:04.743631   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 78/120
	I0719 19:28:05.744965   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 79/120
	I0719 19:28:06.747212   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 80/120
	I0719 19:28:07.748882   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 81/120
	I0719 19:28:08.750319   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 82/120
	I0719 19:28:09.751689   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 83/120
	I0719 19:28:10.753179   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 84/120
	I0719 19:28:11.755457   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 85/120
	I0719 19:28:12.757021   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 86/120
	I0719 19:28:13.758358   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 87/120
	I0719 19:28:14.760028   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 88/120
	I0719 19:28:15.761510   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 89/120
	I0719 19:28:16.764016   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 90/120
	I0719 19:28:17.765523   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 91/120
	I0719 19:28:18.766873   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 92/120
	I0719 19:28:19.768320   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 93/120
	I0719 19:28:20.770402   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 94/120
	I0719 19:28:21.772777   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 95/120
	I0719 19:28:22.774283   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 96/120
	I0719 19:28:23.776166   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 97/120
	I0719 19:28:24.777672   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 98/120
	I0719 19:28:25.779263   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 99/120
	I0719 19:28:26.781445   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 100/120
	I0719 19:28:27.782711   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 101/120
	I0719 19:28:28.784482   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 102/120
	I0719 19:28:29.786115   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 103/120
	I0719 19:28:30.787598   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 104/120
	I0719 19:28:31.789905   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 105/120
	I0719 19:28:32.791463   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 106/120
	I0719 19:28:33.792958   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 107/120
	I0719 19:28:34.794469   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 108/120
	I0719 19:28:35.796109   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 109/120
	I0719 19:28:36.798415   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 110/120
	I0719 19:28:37.800121   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 111/120
	I0719 19:28:38.801600   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 112/120
	I0719 19:28:39.803032   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 113/120
	I0719 19:28:40.804616   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 114/120
	I0719 19:28:41.806519   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 115/120
	I0719 19:28:42.807862   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 116/120
	I0719 19:28:43.809357   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 117/120
	I0719 19:28:44.810930   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 118/120
	I0719 19:28:45.812464   67080 main.go:141] libmachine: (no-preload-971041) Waiting for machine to stop 119/120
	I0719 19:28:46.813214   67080 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 19:28:46.813312   67080 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 19:28:46.815358   67080 out.go:177] 
	W0719 19:28:46.816760   67080 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 19:28:46.816778   67080 out.go:239] * 
	* 
	W0719 19:28:46.819473   67080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:28:46.821807   67080 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-971041 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041: exit status 3 (18.584554452s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:05.408200   67810 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host
	E0719 19:29:05.408217   67810 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-971041" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.12s)
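Here the stop path waited all 120 iterations and then exited with GUEST_STOP_TIMEOUT while the guest still reported "Running". A rough manual-recovery sketch, assuming the libvirt domain carries the profile name shown in the driver logs above (no-preload-971041), that the connection URI is the qemu:///system one used elsewhere in this run, and that force-powering-off the guest is acceptable for CI cleanup:

	# check whether libvirt still considers the guest running
	virsh -c qemu:///system list --all

	# force the guest off (hard power-off), then retry the minikube stop
	virsh -c qemu:///system destroy no-preload-971041
	out/minikube-linux-amd64 stop -p no-preload-971041 --alsologtostderr -v=3

	# the log file named in the box above holds the stop-side details
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log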

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-578533 --alsologtostderr -v=3
E0719 19:27:32.426666   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-578533 --alsologtostderr -v=3: exit status 82 (2m0.520845322s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-578533"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:27:31.864729   67470 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:27:31.864978   67470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:27:31.864988   67470 out.go:304] Setting ErrFile to fd 2...
	I0719 19:27:31.864992   67470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:27:31.865191   67470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:27:31.865413   67470 out.go:298] Setting JSON to false
	I0719 19:27:31.865485   67470 mustload.go:65] Loading cluster: default-k8s-diff-port-578533
	I0719 19:27:31.865808   67470 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:27:31.865885   67470 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:27:31.866063   67470 mustload.go:65] Loading cluster: default-k8s-diff-port-578533
	I0719 19:27:31.866184   67470 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:27:31.866227   67470 stop.go:39] StopHost: default-k8s-diff-port-578533
	I0719 19:27:31.866616   67470 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:27:31.866660   67470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:27:31.880904   67470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46711
	I0719 19:27:31.881405   67470 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:27:31.881910   67470 main.go:141] libmachine: Using API Version  1
	I0719 19:27:31.881938   67470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:27:31.882223   67470 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:27:31.884633   67470 out.go:177] * Stopping node "default-k8s-diff-port-578533"  ...
	I0719 19:27:31.885969   67470 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 19:27:31.886006   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:27:31.886218   67470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 19:27:31.886242   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:27:31.889016   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:27:31.889510   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:26:40 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:27:31.889537   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:27:31.889702   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:27:31.889874   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:27:31.890033   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:27:31.890161   67470 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:27:32.005616   67470 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 19:27:32.070259   67470 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 19:27:32.134791   67470 main.go:141] libmachine: Stopping "default-k8s-diff-port-578533"...
	I0719 19:27:32.134827   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:27:32.136528   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Stop
	I0719 19:27:32.139960   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 0/120
	I0719 19:27:33.141652   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 1/120
	I0719 19:27:34.143173   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 2/120
	I0719 19:27:35.144626   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 3/120
	I0719 19:27:36.146268   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 4/120
	I0719 19:27:37.148357   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 5/120
	I0719 19:27:38.150022   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 6/120
	I0719 19:27:39.151359   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 7/120
	I0719 19:27:40.153818   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 8/120
	I0719 19:27:41.155244   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 9/120
	I0719 19:27:42.156664   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 10/120
	I0719 19:27:43.158212   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 11/120
	I0719 19:27:44.159530   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 12/120
	I0719 19:27:45.160970   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 13/120
	I0719 19:27:46.162340   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 14/120
	I0719 19:27:47.164511   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 15/120
	I0719 19:27:48.166130   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 16/120
	I0719 19:27:49.167461   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 17/120
	I0719 19:27:50.169196   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 18/120
	I0719 19:27:51.170659   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 19/120
	I0719 19:27:52.173219   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 20/120
	I0719 19:27:53.174673   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 21/120
	I0719 19:27:54.176173   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 22/120
	I0719 19:27:55.177442   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 23/120
	I0719 19:27:56.179034   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 24/120
	I0719 19:27:57.181240   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 25/120
	I0719 19:27:58.182814   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 26/120
	I0719 19:27:59.184499   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 27/120
	I0719 19:28:00.185838   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 28/120
	I0719 19:28:01.187380   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 29/120
	I0719 19:28:02.188723   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 30/120
	I0719 19:28:03.190054   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 31/120
	I0719 19:28:04.191352   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 32/120
	I0719 19:28:05.192690   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 33/120
	I0719 19:28:06.194107   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 34/120
	I0719 19:28:07.196221   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 35/120
	I0719 19:28:08.197921   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 36/120
	I0719 19:28:09.199215   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 37/120
	I0719 19:28:10.200802   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 38/120
	I0719 19:28:11.202397   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 39/120
	I0719 19:28:12.204484   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 40/120
	I0719 19:28:13.205883   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 41/120
	I0719 19:28:14.207336   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 42/120
	I0719 19:28:15.209071   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 43/120
	I0719 19:28:16.210707   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 44/120
	I0719 19:28:17.212868   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 45/120
	I0719 19:28:18.214478   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 46/120
	I0719 19:28:19.216425   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 47/120
	I0719 19:28:20.217781   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 48/120
	I0719 19:28:21.219280   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 49/120
	I0719 19:28:22.221800   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 50/120
	I0719 19:28:23.223103   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 51/120
	I0719 19:28:24.224658   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 52/120
	I0719 19:28:25.226343   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 53/120
	I0719 19:28:26.227959   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 54/120
	I0719 19:28:27.229896   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 55/120
	I0719 19:28:28.231527   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 56/120
	I0719 19:28:29.232870   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 57/120
	I0719 19:28:30.234566   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 58/120
	I0719 19:28:31.235935   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 59/120
	I0719 19:28:32.238362   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 60/120
	I0719 19:28:33.240012   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 61/120
	I0719 19:28:34.241346   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 62/120
	I0719 19:28:35.242699   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 63/120
	I0719 19:28:36.244259   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 64/120
	I0719 19:28:37.246308   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 65/120
	I0719 19:28:38.247878   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 66/120
	I0719 19:28:39.249204   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 67/120
	I0719 19:28:40.250640   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 68/120
	I0719 19:28:41.252046   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 69/120
	I0719 19:28:42.254238   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 70/120
	I0719 19:28:43.255881   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 71/120
	I0719 19:28:44.257274   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 72/120
	I0719 19:28:45.258722   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 73/120
	I0719 19:28:46.260401   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 74/120
	I0719 19:28:47.262235   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 75/120
	I0719 19:28:48.263654   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 76/120
	I0719 19:28:49.265182   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 77/120
	I0719 19:28:50.266863   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 78/120
	I0719 19:28:51.268283   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 79/120
	I0719 19:28:52.270741   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 80/120
	I0719 19:28:53.272126   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 81/120
	I0719 19:28:54.273525   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 82/120
	I0719 19:28:55.275260   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 83/120
	I0719 19:28:56.276995   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 84/120
	I0719 19:28:57.279119   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 85/120
	I0719 19:28:58.280644   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 86/120
	I0719 19:28:59.282293   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 87/120
	I0719 19:29:00.283680   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 88/120
	I0719 19:29:01.285299   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 89/120
	I0719 19:29:02.287686   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 90/120
	I0719 19:29:03.289136   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 91/120
	I0719 19:29:04.290500   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 92/120
	I0719 19:29:05.291812   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 93/120
	I0719 19:29:06.293234   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 94/120
	I0719 19:29:07.295330   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 95/120
	I0719 19:29:08.296763   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 96/120
	I0719 19:29:09.298354   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 97/120
	I0719 19:29:10.299729   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 98/120
	I0719 19:29:11.301286   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 99/120
	I0719 19:29:12.303529   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 100/120
	I0719 19:29:13.305051   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 101/120
	I0719 19:29:14.306821   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 102/120
	I0719 19:29:15.308471   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 103/120
	I0719 19:29:16.310023   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 104/120
	I0719 19:29:17.312457   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 105/120
	I0719 19:29:18.314631   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 106/120
	I0719 19:29:19.316075   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 107/120
	I0719 19:29:20.318287   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 108/120
	I0719 19:29:21.319738   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 109/120
	I0719 19:29:22.321146   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 110/120
	I0719 19:29:23.322580   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 111/120
	I0719 19:29:24.324042   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 112/120
	I0719 19:29:25.325584   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 113/120
	I0719 19:29:26.327032   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 114/120
	I0719 19:29:27.329063   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 115/120
	I0719 19:29:28.330541   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 116/120
	I0719 19:29:29.332136   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 117/120
	I0719 19:29:30.333515   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 118/120
	I0719 19:29:31.335036   67470 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for machine to stop 119/120
	I0719 19:29:32.335586   67470 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 19:29:32.335657   67470 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 19:29:32.337414   67470 out.go:177] 
	W0719 19:29:32.338776   67470 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 19:29:32.338800   67470 out.go:239] * 
	* 
	W0719 19:29:32.341246   67470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:29:32.342508   67470 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-578533 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533: exit status 3 (18.631680431s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:50.976196   68234 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host
	E0719 19:29:50.976218   68234 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-578533" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
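The stop failure above follows a fixed pattern: the KVM driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then gives up with GUEST_STOP_TIMEOUT while the guest still reports "Running". A minimal Go sketch of that kind of bounded shutdown wait, using a hypothetical machineRunning probe rather than minikube's actual libmachine API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// machineRunning is a hypothetical stand-in for a hypervisor state query
// (e.g. the driver's GetState call). It always reports "running" here to
// mimic the failing runs in the log; a real probe would ask the hypervisor.
func machineRunning() bool {
	return true
}

// waitForStop polls once per second, up to maxAttempts times, mirroring the
// "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if !machineRunning() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

When every attempt still sees the machine running, minikube exits non-zero (exit status 82 in the runs above), which is what trips the test's stop assertion.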

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-889918 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-889918 --alsologtostderr -v=3: exit status 82 (2m0.510363819s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-889918"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:27:34.944786   67554 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:27:34.944886   67554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:27:34.944896   67554 out.go:304] Setting ErrFile to fd 2...
	I0719 19:27:34.944900   67554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:27:34.945082   67554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:27:34.945322   67554 out.go:298] Setting JSON to false
	I0719 19:27:34.945414   67554 mustload.go:65] Loading cluster: embed-certs-889918
	I0719 19:27:34.945771   67554 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:27:34.945852   67554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:27:34.946024   67554 mustload.go:65] Loading cluster: embed-certs-889918
	I0719 19:27:34.946151   67554 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:27:34.946187   67554 stop.go:39] StopHost: embed-certs-889918
	I0719 19:27:34.946621   67554 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:27:34.946671   67554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:27:34.963888   67554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0719 19:27:34.964378   67554 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:27:34.964966   67554 main.go:141] libmachine: Using API Version  1
	I0719 19:27:34.964987   67554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:27:34.965428   67554 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:27:34.967616   67554 out.go:177] * Stopping node "embed-certs-889918"  ...
	I0719 19:27:34.968793   67554 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 19:27:34.968833   67554 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:27:34.969118   67554 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 19:27:34.969155   67554 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:27:34.972175   67554 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:27:34.972583   67554 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:26:01 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:27:34.972609   67554 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:27:34.972753   67554 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:27:34.972904   67554 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:27:34.973060   67554 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:27:34.973195   67554 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:27:35.057770   67554 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 19:27:35.149392   67554 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 19:27:35.212064   67554 main.go:141] libmachine: Stopping "embed-certs-889918"...
	I0719 19:27:35.212123   67554 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:27:35.213841   67554 main.go:141] libmachine: (embed-certs-889918) Calling .Stop
	I0719 19:27:35.217347   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 0/120
	I0719 19:27:36.218754   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 1/120
	I0719 19:27:37.220218   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 2/120
	I0719 19:27:38.221757   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 3/120
	I0719 19:27:39.223187   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 4/120
	I0719 19:27:40.225086   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 5/120
	I0719 19:27:41.226585   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 6/120
	I0719 19:27:42.227970   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 7/120
	I0719 19:27:43.230383   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 8/120
	I0719 19:27:44.231673   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 9/120
	I0719 19:27:45.234023   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 10/120
	I0719 19:27:46.235285   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 11/120
	I0719 19:27:47.236480   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 12/120
	I0719 19:27:48.237836   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 13/120
	I0719 19:27:49.239282   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 14/120
	I0719 19:27:50.241116   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 15/120
	I0719 19:27:51.242543   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 16/120
	I0719 19:27:52.243887   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 17/120
	I0719 19:27:53.245164   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 18/120
	I0719 19:27:54.246442   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 19/120
	I0719 19:27:55.248292   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 20/120
	I0719 19:27:56.249680   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 21/120
	I0719 19:27:57.251129   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 22/120
	I0719 19:27:58.252809   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 23/120
	I0719 19:27:59.254519   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 24/120
	I0719 19:28:00.256314   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 25/120
	I0719 19:28:01.257695   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 26/120
	I0719 19:28:02.259040   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 27/120
	I0719 19:28:03.260742   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 28/120
	I0719 19:28:04.262141   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 29/120
	I0719 19:28:05.264359   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 30/120
	I0719 19:28:06.265969   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 31/120
	I0719 19:28:07.267493   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 32/120
	I0719 19:28:08.269160   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 33/120
	I0719 19:28:09.270439   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 34/120
	I0719 19:28:10.272621   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 35/120
	I0719 19:28:11.274076   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 36/120
	I0719 19:28:12.275847   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 37/120
	I0719 19:28:13.277338   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 38/120
	I0719 19:28:14.278507   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 39/120
	I0719 19:28:15.280824   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 40/120
	I0719 19:28:16.282702   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 41/120
	I0719 19:28:17.284466   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 42/120
	I0719 19:28:18.285971   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 43/120
	I0719 19:28:19.287179   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 44/120
	I0719 19:28:20.289070   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 45/120
	I0719 19:28:21.290461   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 46/120
	I0719 19:28:22.291951   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 47/120
	I0719 19:28:23.293682   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 48/120
	I0719 19:28:24.294957   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 49/120
	I0719 19:28:25.297067   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 50/120
	I0719 19:28:26.298965   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 51/120
	I0719 19:28:27.300622   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 52/120
	I0719 19:28:28.302192   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 53/120
	I0719 19:28:29.303507   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 54/120
	I0719 19:28:30.305500   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 55/120
	I0719 19:28:31.307167   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 56/120
	I0719 19:28:32.308707   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 57/120
	I0719 19:28:33.310334   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 58/120
	I0719 19:28:34.311512   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 59/120
	I0719 19:28:35.314077   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 60/120
	I0719 19:28:36.315391   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 61/120
	I0719 19:28:37.316888   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 62/120
	I0719 19:28:38.318279   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 63/120
	I0719 19:28:39.319687   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 64/120
	I0719 19:28:40.321991   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 65/120
	I0719 19:28:41.323400   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 66/120
	I0719 19:28:42.325122   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 67/120
	I0719 19:28:43.326602   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 68/120
	I0719 19:28:44.328151   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 69/120
	I0719 19:28:45.330420   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 70/120
	I0719 19:28:46.331667   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 71/120
	I0719 19:28:47.333425   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 72/120
	I0719 19:28:48.334809   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 73/120
	I0719 19:28:49.336174   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 74/120
	I0719 19:28:50.338397   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 75/120
	I0719 19:28:51.339982   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 76/120
	I0719 19:28:52.341731   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 77/120
	I0719 19:28:53.343077   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 78/120
	I0719 19:28:54.344566   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 79/120
	I0719 19:28:55.346038   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 80/120
	I0719 19:28:56.347311   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 81/120
	I0719 19:28:57.348793   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 82/120
	I0719 19:28:58.350106   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 83/120
	I0719 19:28:59.351710   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 84/120
	I0719 19:29:00.353969   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 85/120
	I0719 19:29:01.355249   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 86/120
	I0719 19:29:02.356577   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 87/120
	I0719 19:29:03.357804   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 88/120
	I0719 19:29:04.359212   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 89/120
	I0719 19:29:05.361634   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 90/120
	I0719 19:29:06.363040   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 91/120
	I0719 19:29:07.364871   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 92/120
	I0719 19:29:08.366316   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 93/120
	I0719 19:29:09.367640   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 94/120
	I0719 19:29:10.369727   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 95/120
	I0719 19:29:11.371146   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 96/120
	I0719 19:29:12.372690   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 97/120
	I0719 19:29:13.374147   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 98/120
	I0719 19:29:14.375376   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 99/120
	I0719 19:29:15.377488   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 100/120
	I0719 19:29:16.378952   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 101/120
	I0719 19:29:17.380395   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 102/120
	I0719 19:29:18.382285   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 103/120
	I0719 19:29:19.383602   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 104/120
	I0719 19:29:20.385684   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 105/120
	I0719 19:29:21.387093   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 106/120
	I0719 19:29:22.388649   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 107/120
	I0719 19:29:23.389955   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 108/120
	I0719 19:29:24.391175   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 109/120
	I0719 19:29:25.393227   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 110/120
	I0719 19:29:26.394607   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 111/120
	I0719 19:29:27.396004   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 112/120
	I0719 19:29:28.397250   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 113/120
	I0719 19:29:29.398679   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 114/120
	I0719 19:29:30.400728   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 115/120
	I0719 19:29:31.402275   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 116/120
	I0719 19:29:32.403499   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 117/120
	I0719 19:29:33.404989   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 118/120
	I0719 19:29:34.406467   67554 main.go:141] libmachine: (embed-certs-889918) Waiting for machine to stop 119/120
	I0719 19:29:35.407194   67554 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 19:29:35.407271   67554 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 19:29:35.409267   67554 out.go:177] 
	W0719 19:29:35.410673   67554 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 19:29:35.410695   67554 out.go:239] * 
	* 
	W0719 19:29:35.413257   67554 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:29:35.414468   67554 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-889918 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
E0719 19:29:43.524925   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918: exit status 3 (18.630966407s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:54.048179   68280 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0719 19:29:54.048205   68280 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-889918" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041: exit status 3 (3.167867047s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:08.576156   67906 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host
	E0719 19:29:08.576174   67906 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-971041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-971041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15274405s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-971041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041: exit status 3 (3.068033521s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:17.796182   67987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host
	E0719 19:29:17.796203   67987 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.119:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-971041" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
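The EnableAddonAfterStop failures above all share the same root cause visible in the stderr: SSH to the VM on port 22 fails with "connect: no route to host", so both the status probe and the addon enable bail out before doing any work. A small, self-contained Go reachability check of that port (the address is copied from the log; the probe itself is only an illustration, not part of the test harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the no-preload log above; adjust for the profile under test.
	addr := "192.168.50.119:22"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// On the failing runs this prints something like
		// "dial tcp 192.168.50.119:22: connect: no route to host".
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable")
}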

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-570369 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-570369 create -f testdata/busybox.yaml: exit status 1 (43.34612ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-570369" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-570369 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 6 (217.165835ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:18.323724   68097 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 6 (212.689461ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:18.536864   68127 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m38.704378937s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-570369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-570369 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-570369 describe deploy/metrics-server -n kube-system: exit status 1 (45.125094ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-570369" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-570369 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 6 (218.379357ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:30:57.503579   68866 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-570369" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533: exit status 3 (3.167745665s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:54.144148   68379 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host
	E0719 19:29:54.144164   68379 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-578533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-578533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152517055s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-578533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
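Note: exit status 11 here is not an addon problem as such: every step that needs SSH fails with "dial tcp 192.168.72.104:22: connect: no route to host", i.e. the kvm2 guest is unreachable after the stop. A quick triage sketch from the Jenkins host (assumes the libvirt CLI is available; the domain name matching the profile name is an assumption):

    ping -c1 -W1 192.168.72.104                         # is the guest reachable at all?
    sudo virsh list --all                               # is the libvirt domain running?
    sudo virsh domifaddr default-k8s-diff-port-578533   # does it still hold this lease?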
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533: exit status 3 (3.063135176s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:30:03.360253   68507 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host
	E0719 19:30:03.360283   68507 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-578533" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918: exit status 3 (3.16824282s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:29:57.216186   68409 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0719 19:29:57.216211   68409 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-889918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-889918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153333043s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-889918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918: exit status 3 (3.062274772s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 19:30:06.432222   68542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host
	E0719 19:30:06.432258   68542 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.107:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-889918" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
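Note: both EnableAddonAfterStop failures in this run share the same signature and only the dial target differs (192.168.72.104:22 for default-k8s-diff-port, 192.168.61.107:22 for embed-certs). A one-liner to pull the affected endpoints out of a saved copy of this report (report.txt is a hypothetical filename):

    grep -oE 'dial tcp [0-9.]+:22: connect: no route to host' report.txt | sort | uniq -c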

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (727.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0719 19:32:32.425973   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 19:33:55.473912   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 19:34:43.525074   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 19:37:32.426301   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m4.204968483s)
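Note: the cert_rotation.go errors interleaved above reference client certificates of other profiles (addons-990895, functional-788436) and are logged by the long-running test process (pid 22028), not by this start attempt (pid 68995); they appear to be background noise from profiles torn down earlier in the run. To list which profiles that noise touches (illustrative; report.txt is a hypothetical saved copy of this log):

    grep -o 'profiles/[^/]*/client.crt' report.txt | sort -u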

                                                
                                                
-- stdout --
	* [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
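Note: the truncated stdout above is itself a clue: "Generating certificates and keys" and "Booting up control plane" appear twice, which suggests the kubeadm bootstrap was retried and the v1.20.0 control plane never became healthy before the command gave up with exit status 109 after roughly 12 minutes. When the VM is reachable, the usual next step is to inspect kubelet and container state inside the guest (diagnostic sketch, not part of the run):

    out/minikube-linux-amd64 ssh -p old-k8s-version-570369 -- sudo crictl ps -a
    out/minikube-linux-amd64 ssh -p old-k8s-version-570369 -- sudo journalctl -u kubelet --no-pager | tail -n 50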
** stderr ** 
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
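Note: the block above is libmachine polling the libvirt DHCP leases for the domain's IP, with a delay that grows from 240ms to about 4.2s per attempt until the lease for 52:54:00:80:e3:2b shows up. A compressed sketch of that wait loop, using virsh directly rather than minikube's internal retry helper (illustrative only):

    delay=0.25
    for attempt in $(seq 1 14); do
      ip="$(sudo virsh domifaddr old-k8s-version-570369 2>/dev/null | awk '/ipv4/ {print $4}')"
      [ -n "$ip" ] && { echo "got IP ${ip%/*}"; break; }
      sleep "$delay"
      delay="$(echo "$delay * 1.5" | bc)"   # roughly mirrors the growing intervals logged above
    done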
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
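Note: configureAuth above regenerated the machine's server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.220, localhost, minikube, old-k8s-version-570369) and copied it into /etc/docker inside the guest. The SANs can be spot-checked with openssl against the host-side copy (assumes openssl is installed on the Jenkins host):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'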
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
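Note: the SSH command above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts CRI-O, so registries inside the service CIDR can be pulled without TLS. Verifying the drop-in afterwards is a one-liner (illustrative):

    out/minikube-linux-amd64 ssh -p old-k8s-version-570369 -- cat /etc/sysconfig/crio.minikube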
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
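Note: the tolerance check above is plain arithmetic on the two timestamps: both fall in the same second (epoch 1721417695, i.e. 2024-07-19 19:34:55 UTC) and the fractional parts differ by the logged 92.959623ms:

    echo '1721417695.444458011 - 1721417695.351498388' | bc   # -> .092959623 s, the delta logged above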
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
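	The cycle above is minikube's apiserver wait loop: it greps for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and container-status logs before retrying a few seconds later. A minimal shell sketch of the same checks, using only the commands that appear in this log (run on the node; assumes crictl and journalctl are available there, as they are here):

	    # Is an apiserver process running? (same pgrep pattern as the log)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Ask the CRI runtime for each control-plane container by name; empty output means none found
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done

	    # Fallback log collection performed when no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig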
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
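	Every "describe nodes" attempt in this loop fails the same way: the bundled kubectl is refused a connection to the apiserver on localhost:8443, which is consistent with the empty crictl listings above (no kube-apiserver container is running at all). A quick manual check from the node, shown only as a hypothetical sketch (ss and curl are not part of the test run):

	    # Is anything listening on the apiserver port reported in the log (8443)?
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"

	    # Probe the apiserver health endpoint; expected to fail while no container is running
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"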
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
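The block above is one iteration of minikube's apiserver wait loop: the same per-component crictl queries and log gathering repeat roughly every three seconds in this run until a kube-apiserver container appears or the restart deadline expires. A minimal manual reproduction of one iteration, as a sketch that assumes crictl, journalctl and the crio unit are present on the node exactly as this log shows:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$c"
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400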
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
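The grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is removed so the kubeadm init that follows can rewrite it. Here all four files are absent, so each grep exits with status 2 and the rm is a no-op. A condensed sketch of the same check, with the endpoint taken from this log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done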
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
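kubeadm's own hint above is the quickest way to see why nothing ever answered on the kubelet health port. Consolidated from that message, with the CRI socket path this run uses:

	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause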
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
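Two details in this failure are worth checking on the node before the retry that follows: the [WARNING Service-Kubelet] line (the kubelet unit is not enabled) and whether the static pod manifests were actually written. A sketch using only the paths kubeadm reports above:

	systemctl is-enabled kubelet
	sudo systemctl enable kubelet.service
	ls /etc/kubernetes/manifests/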
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	* 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	* 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-570369 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (232.603488ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25: (1.530431409s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
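	The "%!s(MISSING)" token in the provisioning command above (and in the later "date +%!s(MISSING).%!N(MISSING)" line) is how Go's fmt package renders a literal %s verb when the command string is echoed through a formatting call without a matching argument; it is a logging artifact, not what ran on the guest. Judging from the tee output, the command actually executed was most likely equivalent to the following sketch (a reconstruction, assuming the dropped token was simply the literal printf format "%s"):
	
	  sudo mkdir -p /etc/sysconfig && printf %s "
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio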
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
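[Editor's note] The libmachine DBG lines above are a poll loop: the kvm2 driver repeatedly looks for the domain's DHCP-assigned IP and, while none is present, retries after a randomized, roughly increasing delay (about 500ms, 800ms, 1s, 1.6s, 2s in this run). A minimal sketch of that retry pattern follows; hasIP is a hypothetical stand-in for the real libvirt lease lookup, and the delay schedule only approximates the retry.go intervals shown in the log.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // hasIP stands in for the real check, which queries the libvirt DHCP
    // leases for the domain's MAC address (hypothetical helper).
    func hasIP() (string, bool) { return "", false }

    func main() {
    	base := 500 * time.Millisecond
    	for attempt := 0; attempt < 10; attempt++ {
    		if ip, ok := hasIP(); ok {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Randomized, growing delay, similar in spirit to the
    		// "will retry after ..." intervals in the log above.
    		d := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    		base += base / 2
    	}
    	fmt.Println("gave up waiting for an IP address")
    }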
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
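[Editor's note] The healthz wait above treats 403 (anonymous access rejected while RBAC is still bootstrapping) and 500 (post-start hooks such as rbac/bootstrap-roles not yet finished) as "not ready" and keeps polling until the endpoint returns 200 with body "ok". A minimal, self-contained poller in that spirit is sketched below; the address is the one from the log, and skipping TLS verification is an assumption made only to keep the sketch short, not how minikube authenticates to the apiserver.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.61.107:8443/healthz" // apiserver address from the log above
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// NOTE: verification skipped only for brevity; a real client
    		// would trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// 403 and 500 mean the apiserver is up but still bootstrapping.
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }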
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
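[Editor's note] The pod_ready wait above skips pods whose node has not yet reported "Ready" and otherwise waits for each system-critical pod's Ready condition. A compact client-go sketch of one such check follows; it is an illustration under stated assumptions (kubeconfig path and the k8s-app=kube-dns selector come from the log), not minikube's actual wait loop.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path as used on the minikube host in the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
    			fmt.Println("coredns is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for coredns to become Ready")
    }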
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
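[Editor's note] The addon enablement a few lines above copies each manifest to /etc/kubernetes/addons/ and applies them with the cluster's own kubectl binary. A small sketch of that apply step using os/exec is given below; the manifest paths, kubeconfig, and kubectl location are taken from the log, and the command is shown running locally for brevity rather than through minikube's SSH runner.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Manifest paths and kubectl location as shown in the log above.
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	// sudo treats a leading VAR=value word as an environment assignment,
    	// matching the "sudo KUBECONFIG=... kubectl apply ..." line in the log.
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl",
    		"apply",
    	}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		fmt.Println("kubectl apply failed:", err)
    	}
    }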
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
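At this step provision.go generates a TLS server certificate for the guest, signed by the shared minikube CA, with the SAN list shown in the log line above (127.0.0.1, 192.168.39.220, localhost, minikube, old-k8s-version-570369). minikube does this with Go's crypto libraries; purely as an illustrative sketch, a hand-made equivalent with openssl would look roughly like this (file names follow the paths in the log, the validity period is an assumption):

    # Illustrative only: reproduce the kind of server cert provision.go generates.
    # ca.pem / ca-key.pem are the CA files named in the log; -days 1095 is an assumption.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.old-k8s-version-570369"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.220,DNS:localhost,DNS:minikube,DNS:old-k8s-version-570369")

The resulting server.pem / server-key.pem are then copied to /etc/docker on the guest by copyRemoteCerts, as the scp lines below show.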
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
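The %!s(MISSING) in the logged command above is Go's fmt placeholder for a format verb that had no argument when the command string was re-logged; the value that was actually written is visible in the SSH output two lines later. Reconstructed from those lines, the effective provisioning command is roughly:

    # Reconstruction of the step logged above (value taken from the SSH output).
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio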
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
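The three sed edits above pin the pause image and switch CRI-O to the cgroupfs manager with conmon placed in the pod cgroup. A sketch of the expected end state of the drop-in (an illustrative check, not a capture from this run):

    # Expected effect of the sed edits on the CRI-O drop-in.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"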
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
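The grep on the previous line exits non-zero because the host.minikube.internal entry is missing, so the bash one-liner rewrites /etc/hosts with the libvirt gateway address appended. The resulting entry can be checked with (sketch):

    # host.minikube.internal should now resolve to the gateway (value from the log).
    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.39.1  host.minikube.internal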
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
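pod_ready.go is polling the Ready condition of each system-critical pod; metrics-server-569cc877fc-sfc85 is still reporting "Ready":"False" later in this log. A manual version of the same check (sketch; the kubectl context name is assumed to match the profile name):

    # Check the Ready condition that pod_ready.go waits on (context name is an assumption).
    kubectl --context embed-certs-889918 -n kube-system get pod metrics-server-569cc877fc-sfc85 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'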
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
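Because the extracted preload still does not contain the expected v1.20.0 images, cache_images falls back to loading each image from the local cache; the podman image inspect calls above are how it decides whether an image (with the exact pinned ID) is already present in CRI-O's storage. A manual version of that presence check (sketch):

    # Same check minikube performs before copying an image from its cache (illustrative).
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2 \
      || echo "not present in CRI-O storage; minikube will transfer it from the host cache"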
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
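The wait loop above shows the typical apiserver startup sequence on default-k8s-diff-port-578533: the first probes come back 403 because the system:anonymous user is not yet authorized for /healthz, then 500 while post-start hooks (rbac/bootstrap-roles, the scheduling priority classes) are still pending, and finally 200 ok. The same per-check breakdown seen in the 500 bodies can be requested directly (sketch; -k skips verification of the apiserver's self-signed certificate):

    # Reproduce the health probe with the per-check detail shown above.
    curl -k "https://192.168.72.104:8444/healthz?verbose"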
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
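For anyone reading the provisioning trace, a hypothetical spot check (not in the log) to confirm the SANs that provision.go baked into the server cert copied above would be:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'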
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
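Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following drop-in settings (a sketch; the [crio.image]/[crio.runtime] table headers are assumed from CRI-O's config layout and do not appear in the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]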
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
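The three commands above amount to the usual bridge-netfilter preflight: probe the sysctl, fall back to loading br_netfilter when the key is missing, then enable IPv4 forwarding. A condensed sketch of the same steps:

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter   # key only exists once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"                            # kube-proxy/CNI need forwarding enabled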
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
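Condensing the per-image work above into one loop body (a sketch only; $img and $tar are placeholders, and the real cache_images code also compares image IDs against expected hashes before deciding an image "needs transfer"):

	if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	    sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true        # drop any stale/mismatched copy from the runtime
	    sudo podman load -i "/var/lib/minikube/images/$tar"        # load the cached tarball into CRI-O's image store
	fi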
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
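The ln -fs steps above exist because OpenSSL locates trusted CAs in /etc/ssl/certs via subject-hash-named symlinks; a sketch of the pattern for one cert (hash value taken from the log):

	h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)     # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # hash-named link OpenSSL actually looks up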
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
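
The lines above follow the standard OpenSSL CA layout: each CA certificate copied to /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and symlinked into /etc/ssl/certs as <hash>.0, and each serving certificate is then checked with "-checkend 86400" (still valid 24 hours from now). Below is a minimal Go sketch of those same shell steps; it is illustrative only, not minikube's certs.go implementation, and the real commands run as root over SSH (writing to /etc/ssl/certs needs root).

// Illustrative re-run of the openssl steps logged above: hash a CA cert,
// symlink it into /etc/ssl/certs, and check a serving cert's 24h validity.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hashAndLink(pem string) error {
	// openssl x509 -hash -noout -in <pem>  -> subject hash, e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs <pem> /etc/ssl/certs/<hash>.0 (the log runs this via sudo)
	return exec.Command("ln", "-fs", pem, link).Run()
}

func validForADay(crt string) bool {
	// -checkend 86400 exits 0 if the cert is still valid 86400s (24h) from now.
	return exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run() == nil
}

func main() {
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("hash/link failed:", err)
	}
	fmt.Println("apiserver-kubelet-client valid for 24h:",
		validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
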
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
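
Because existing configuration files were found, the run above re-executes individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against /var/tmp/minikube/kubeadm.yaml instead of performing a full kubeadm init. A rough Go sketch of that phase sequence is shown below, using only the phase names and --config flag visible in the log; it is illustrative, not minikube's kubeadm.go.

// Sketch of the restart path logged above: run each kubeadm init phase in
// order against the existing kubeadm.yaml rather than a full `kubeadm init`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadmYAML := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", kubeadmYAML)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s\n", args, err, out)
	}
}
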
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
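
After the control-plane phases, the 68019 run above polls https://192.168.50.119:8443/healthz until it returns 200 with body "ok"; a 403 (anonymous user forbidden) or 500 (post-start hooks such as rbac/bootstrap-roles still failing) is treated as not-yet-healthy and retried roughly every half second. The Go sketch below shows that poll loop in miniature; it skips TLS verification purely for brevity, whereas the real api_server.go check uses the cluster certificates, and the address is simply the one from the log.

// Minimal healthz wait: poll the apiserver /healthz endpoint, treating
// 403/500 as "not ready yet" and a 200 "ok" body as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.119:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
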
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
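
The pod_ready lines above (and the interleaved metrics-server waits from the other runs) all reduce to the same check: fetch the pod and see whether its Ready condition is True. The client-go sketch below shows that check, using a pod name taken from the log; it assumes a reachable kubeconfig at the default location and is not minikube's pod_ready helper.

// Sketch: report whether a pod's Ready condition is True, the way the
// pod_ready waits in the log do.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(cs, "kube-system", "coredns-5cfdc65f69-dzgdb") // pod name from the log
	fmt.Println("ready:", ready, "err:", err)
}
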
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
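
In the 68995 run (the old-k8s-version v1.20.0 cluster) no kube-apiserver process and no containers are found, so the loop above repeatedly queries the CRI for each component and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output for diagnostics. The Go sketch below shows the shape of those crictl queries, where an empty result corresponds to the found id: "" / 0 containers lines; it is illustrative only, not minikube's cri.go.

// Sketch: list container IDs via crictl with a name filter, mirroring the
// `sudo crictl ps -a --quiet --name=<component>` calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// One container ID per line; empty output means no matching containers.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		fmt.Printf("%s: %d containers %v (err=%v)\n", name, len(ids), ids, err)
	}
}
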
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
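(The recurring "connection to the server localhost:8443 was refused" output above shows that the kube-apiserver never came up on this node, so every kubectl-based "describe nodes" gather fails the same way. A minimal manual check from inside the node, assuming shell access via `minikube ssh` for this profile (the profile name is not shown in this excerpt), would reuse the same commands the log already runs:

    sudo crictl ps -a --name=kube-apiserver        # is an apiserver container present at all?
    sudo journalctl -u kubelet -n 400 --no-pager   # kubelet logs: why static pods are not starting

This is a diagnostic sketch, not part of the test harness itself.)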
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
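(The interleaved pod_ready lines appear to come from other StartStop profiles, distinguishable by their PIDs 68019, 68536 and 68617, each polling its metrics-server pod for a Ready condition. A rough manual equivalent of one such check, with the kubeconfig context name assumed since it is not shown in this excerpt, would be:

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-sfc85 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

The pod names here are taken from the log lines above.)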
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
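(Each probe cycle above repeats on roughly a three-second interval, as the timestamps show, presumably until the start/wait timeout is reached. As a hedged sketch, the apiserver wait that the log performs amounts to something like:

    # poll for a kube-apiserver process the same way the log's pgrep probe does
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done

The pgrep pattern is copied verbatim from the log; the loop itself is only an illustrative equivalent, not minikube's actual implementation.)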
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
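Where control-plane containers do exist (as on the node logged by process 68617 above), the same gather step tails each container's log individually instead of reporting "No container was found". A minimal sketch, reusing a container ID reported in this run; any ID returned by crictl ps -a can be substituted.

	# Tail the last 400 lines of one container's log, mirroring the logged command
	# (ID below is the kube-apiserver container reported by this run).
	sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5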
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
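The health probe logged here is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 response with the body "ok" is what minikube counts as healthy. A quick manual equivalent, using the apiserver address from this run (-k skips TLS verification for the hand check):

    curl -k https://192.168.61.107:8443/healthz
    # expected on success: ok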
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
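Because the existing v1.20.0 control plane could not be restarted (the apiserver on localhost:8443 stayed unreachable throughout the roughly 4m retry window), minikube falls back to wiping the node state and re-initializing the cluster. The reset step it runs is a plain kubeadm reset against the CRI-O socket, using the per-version binaries it keeps under /var/lib/minikube/binaries; the same call by hand:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force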
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
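At this point the embed-certs-889918 profile is reported as started even though metrics-server-569cc877fc-sfc85 is still Pending in the pod inventory a few lines up. The same inventory can be pulled outside the test harness with:

    kubectl --context embed-certs-889918 -n kube-system get pods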
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
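The stale-config cleanup above applies the same check to each of the four kubeconfig files kubeadm manages: grep the file for the expected control-plane endpoint, and if the file is absent or does not mention it, remove it so the following kubeadm init can write a fresh one. A compact sketch of that loop for this profile (port 8443 here; the default-k8s-diff-port run later in the log does the same against :8444):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done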
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
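The bridge CNI step above only copies a small conflist into /etc/cni/net.d so CRI-O can attach pods to a host bridge. The actual 496-byte file minikube generates is not reproduced in this log; the following is only an illustrative sketch of what a bridge conflist of that shape looks like (every field value below is an assumption, not the generated file):

    sudo mkdir -p /etc/cni/net.d
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF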
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
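The repeated kubelet-check failures for the v1.20.0 node are kubeadm polling the kubelet's local healthz port; the log quotes the exact probe it uses. Run by hand on the node it is simply:

    curl -sSL http://localhost:10248/healthz
    # "connection refused" means the kubelet process is not running (or keeps crashing on start)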
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
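The burst of "kubectl get sa default" calls above is the post-init privilege wait: minikube first creates the minikube-rbac clusterrolebinding (granting cluster-admin to the kube-system default service account) and then polls roughly every half second until the call succeeds, which is the elevateKubeSystemPrivileges wait timed just above. A hedged shell equivalent using the same pinned kubectl binary as this run:

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done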
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
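For anyone replaying this run by hand, the addon verification that follows in the log can also be done directly with kubectl; the commands below are illustrative only (they use the no-preload-971041 context and object names that appear later in this log, and were not executed by the test):

  kubectl --context no-preload-971041 -n kube-system get deployment metrics-server
  kubectl --context no-preload-971041 -n kube-system get pod storage-provisioner
  kubectl --context no-preload-971041 get storageclass
  # once the metrics API is actually serving, resource usage should appear:
  kubectl --context no-preload-971041 top nodes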
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
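The healthz probe above can be reproduced from the host with curl; the certificate paths below follow minikube's usual client-certificate layout and are an assumption, not values taken from this log:

  curl --cacert ~/.minikube/ca.crt \
       --cert   ~/.minikube/profiles/no-preload-971041/client.crt \
       --key    ~/.minikube/profiles/no-preload-971041/client.key \
       https://192.168.50.119:8443/healthz
  # a healthy apiserver answers 200 with the body "ok", matching the lines above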
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
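The "Done!" line means the test's kubeconfig now points at this cluster by default; spelled out explicitly (purely as an illustration, not commands from the run), that is equivalent to:

  kubectl config use-context no-preload-971041   # already the default per the log
  kubectl --context no-preload-971041 get nodes
  kubectl --context no-preload-971041 -n kube-system get pods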
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
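The config check above boils down to: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint. Collapsed into a single loop (a paraphrase of what minikube just did, not code taken from the run):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done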
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
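The refused connection to localhost:8443 is consistent with the empty container listings above: no kube-apiserver ever came up, so nothing is serving that port. A quick illustrative check (not part of this run; assumes ss is available in the guest and uses the profile name from the CRI-O header below) would be:

  minikube ssh -p old-k8s-version-570369 -- "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"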
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
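Acting on the suggestion above, a plausible follow-up (not attempted in this run) is to compare the cgroup driver the kubelet was configured with against the one CRI-O uses, then retry with the suggested flag; the profile name comes from the CRI-O header below, everything else is illustrative:

  minikube ssh -p old-k8s-version-570369 -- sudo grep -i cgroupdriver /var/lib/kubelet/config.yaml
  minikube ssh -p old-k8s-version-570369 -- "sudo crio config 2>/dev/null | grep cgroup_manager"
  minikube start -p old-k8s-version-570369 --extra-config=kubelet.cgroup-driver=systemd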
	I0719 19:43:06.163354   68995 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 19:43:07 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:07.973066519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418187973045371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30f2381d-8509-4471-800e-c2cf0d75baa8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:07 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:07.973699869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d8b1e59-c3cf-462f-a871-1ba5ccb80cee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:07 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:07.973783201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d8b1e59-c3cf-462f-a871-1ba5ccb80cee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:07 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:07.973838695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7d8b1e59-c3cf-462f-a871-1ba5ccb80cee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.008005684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fdd6b91-9b38-46ab-a422-40a9104e0eee name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.008118591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fdd6b91-9b38-46ab-a422-40a9104e0eee name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.009667845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b0906bd-27da-4d7e-aecb-02ec48d56f05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.010181235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418188010145344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b0906bd-27da-4d7e-aecb-02ec48d56f05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.010717734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7db75e5d-cc61-4cdb-9ea7-1245f5112a15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.010801453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7db75e5d-cc61-4cdb-9ea7-1245f5112a15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.010856372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7db75e5d-cc61-4cdb-9ea7-1245f5112a15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.039938267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf471cb8-5791-4d09-9a59-0bdaa656331c name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.040060569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf471cb8-5791-4d09-9a59-0bdaa656331c name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.041079884Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dc5868c-a637-4cb5-8a48-2af7997544a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.041733105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418188041699350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc5868c-a637-4cb5-8a48-2af7997544a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.042262597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70349275-b98e-4cf6-b969-7c0c9e019cf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.042364562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70349275-b98e-4cf6-b969-7c0c9e019cf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.042457084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=70349275-b98e-4cf6-b969-7c0c9e019cf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.076873612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2aefe28-a77f-4073-bddf-edab6f73ff78 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.076950921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2aefe28-a77f-4073-bddf-edab6f73ff78 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.078140764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5221b6c7-7efa-49d3-8dab-2347f615c12d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.079443370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418188079273810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5221b6c7-7efa-49d3-8dab-2347f615c12d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.081214578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34ab34d6-01cc-47ba-97e5-5fd3f8a7dcb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.082220221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34ab34d6-01cc-47ba-97e5-5fd3f8a7dcb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:43:08 old-k8s-version-570369 crio[650]: time="2024-07-19 19:43:08.082322163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=34ab34d6-01cc-47ba-97e5-5fd3f8a7dcb4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051232] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651686] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.772958] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.839428] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.060470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070691] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149713] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.157817] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.254351] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul19 19:35] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.082478] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.954850] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +10.480938] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 19:39] systemd-fstab-generator[5050]: Ignoring "noauto" option for root device
	[Jul19 19:41] systemd-fstab-generator[5329]: Ignoring "noauto" option for root device
	[  +0.064414] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:43:08 up 8 min,  0 users,  load average: 0.10, 0.10, 0.06
	Linux old-k8s-version-570369 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: net/http.(*Transport).dialConn(0xc000866000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0006da300, 0x5, 0xc000632ab0, 0x24, 0x0, 0xc000b75d40, ...)
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: net/http.(*Transport).dialConnFor(0xc000866000, 0xc0004f33f0)
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: created by net/http.(*Transport).queueForDial
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: goroutine 153 [select]:
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0006c9680, 0xc00021a501, 0xc000b89e80, 0xc000328200, 0xc000865d40, 0xc000865d00)
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00021a5a0, 0x0, 0x0)
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0006dd180)
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5505]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 19 19:43:05 old-k8s-version-570369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 19:43:05 old-k8s-version-570369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 19:43:05 old-k8s-version-570369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 19 19:43:05 old-k8s-version-570369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 19:43:05 old-k8s-version-570369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5543]: I0719 19:43:05.888654    5543 server.go:416] Version: v1.20.0
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5543]: I0719 19:43:05.888969    5543 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5543]: I0719 19:43:05.891116    5543 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5543]: W0719 19:43:05.893751    5543 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 19 19:43:05 old-k8s-version-570369 kubelet[5543]: I0719 19:43:05.894565    5543 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (222.02699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (727.68s)
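The kubeadm and kubelet output above already names the triage path for this failure: the kubelet crash-loops (systemd restart counter at 20) and the apiserver never answers on localhost:8443. A minimal manual triage sketch, assuming the old-k8s-version-570369 VM is still reachable through the profile's ssh wrapper; the exact invocations below are illustrative and not part of the recorded test run:

    out/minikube-linux-amd64 -p old-k8s-version-570369 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p old-k8s-version-570369 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 -p old-k8s-version-570369 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    # retry with the suggestion minikube itself prints (see issue 4172 above) -- a guess, not a confirmed fix:
    out/minikube-linux-amd64 start -p old-k8s-version-570369 --extra-config=kubelet.cgroup-driver=systemd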

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0719 19:39:43.525323   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-889918 -n embed-certs-889918
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:48:13.081369947 +0000 UTC m=+5667.345132939
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
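The wait above is a label-selector poll against the kubernetes-dashboard namespace. A hand-run equivalent, sketched here assuming the embed-certs-889918 context is present in the active kubeconfig (these commands are illustrative and were not executed by the test):

    kubectl --context embed-certs-889918 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-889918 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s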
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-889918 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-889918 logs -n 25: (1.998676379s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
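
The kubeadm config rendered above pins the kubelet to the CRI-O socket, the cgroupfs driver, and all-"0%" evictionHard thresholds, so disk-pressure eviction is effectively disabled inside the test VM. A minimal Go sketch, assuming gopkg.in/yaml.v3 and a hypothetical local file holding just the KubeletConfiguration document, that reads back the fields minikube customizes:

// kubeletcheck.go - illustrative sketch only, not part of minikube.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver             string            `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string            `yaml:"containerRuntimeEndpoint"`
	HairpinMode              string            `yaml:"hairpinMode"`
	FailSwapOn               bool              `yaml:"failSwapOn"`
	EvictionHard             map[string]string `yaml:"evictionHard"`
}

func main() {
	// Hypothetical path holding the KubeletConfiguration document above.
	data, err := os.ReadFile("kubelet-config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg kubeletConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cgroupDriver=%s runtime=%s hairpin=%s failSwapOn=%v\n",
		cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.HairpinMode, cfg.FailSwapOn)
	// With the config above every evictionHard threshold is "0%",
	// i.e. disk-pressure eviction is turned off for the test VM.
	fmt.Println("evictionHard:", cfg.EvictionHard)
}
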
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
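
Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. A rough Go equivalent of one such check (illustrative sketch, not minikube code; the path is one of those checked in the log):

// checkend.go - illustrative sketch of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same semantics as -checkend 86400: does the cert outlive the next 24h?
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
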
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
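
The loop above first waits for a kube-apiserver process (pgrep) and then polls https://192.168.72.104:8444/healthz until it answers or the overall timeout expires. A minimal sketch of that polling pattern (helper names assumed; TLS verification is skipped because the probe only checks liveness, not identity):

// healthzwait.go - illustrative sketch of waiting for an apiserver /healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikube's private CA,
			// so skip verification for this liveness probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.104:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
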
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
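
The server certificate generated above is signed by the machine CA and carries the SANs listed in the log (127.0.0.1, 192.168.61.107, embed-certs-889918, localhost, minikube). A self-contained Go sketch of issuing such a certificate (illustrative only; minikube's provisioner reuses the CA key pair on disk rather than generating one in memory):

// servercert.go - illustrative sketch of issuing a server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed: in minikube the CA already exists; here one is created in memory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-889918"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go log line above.
		DNSNames:    []string{"embed-certs-889918", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.107")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
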
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
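
The guest clock is read with `date +%s.%N` over SSH and compared against the host clock; only a delta beyond the tolerance would make minikube resync the VM time. A small illustrative sketch of that comparison, using the timestamp from the log above and an assumed tolerance value:

// clockdelta.go - illustrative sketch of the guest/host clock comparison.
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano parses `date +%s.%N` output, e.g. "1721417674.441309314".
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1721417674.441309314") // value from the log above
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("clock delta %v (tolerance %v, resync needed: %v)\n",
		delta, tolerance, delta > tolerance)
}
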
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
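The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl injected into default_sysctls. A sketch of verifying the result on the node (expected values inferred from the commands above, not read back from the file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",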
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
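The netfilter check above fails only because br_netfilter is not loaded yet; the modprobe and the ip_forward write are the standard kernel prerequisites for pod networking that kubeadm's preflight checks expect. A minimal manual verification of the same knobs:

    sudo modprobe br_netfilter                     # load the bridge netfilter module
    lsmod | grep br_netfilter                      # confirm it is present
    sysctl net.bridge.bridge-nf-call-iptables      # should report 1 once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward              # should print 1 after the echo above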
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
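The block above is the preload fast path: check whether /preloaded.tar.lz4 already exists on the guest, scp the cached tarball up if it does not, untar it (xattrs preserved) straight into /var so CRI-O's image store is pre-populated, then delete the tarball. Condensed into plain shell, run inside the guest and assuming the tarball has already been copied up:

    stat -c "%s %y" /preloaded.tar.lz4             # existence check; the tarball is scp'd up when this fails
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json               # the v1.30.3 images should now show up as preloaded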
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
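Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same rewrite idiom: filter out any stale line for the name, append the fresh IP/name pair, then copy the temp file back over /etc/hosts. Unpacked, with the name and IP as placeholders standing in for the values shown in the log:

    NAME=control-plane.minikube.internal           # placeholder; the real value comes from the log line above
    IP=192.168.61.107                              # placeholder; ditto
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts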
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
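Each ln -fs into /etc/ssl/certs above uses OpenSSL's subject-hash naming: the link name is the certificate's subject hash plus a .0 suffix (3ec20f2e.0, b5213941.0, 51391683.0 in the log), which is what OpenSSL's hashed certificate directory lookup expects. Deriving one of those names by hand, as a sketch:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"                                   # b5213941 for the minikube CA, matching the link above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"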
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
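The -checkend 86400 probes above are cheap expiry checks: openssl exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, which lets the restart path decide whether any control-plane cert has to be regenerated. For example:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert is valid for at least another 24h"
    else
        echo "cert expires within 24h (or cannot be read); it would need regenerating"
    fi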
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
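The four grep/rm pairs above are the stale-kubeconfig cleanup: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it; here the greps fail simply because the files do not exist yet. The same logic as a loop (a sketch, not the code minikube actually runs):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"        # stale or missing: let kubeadm recreate it
        fi
    done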
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
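Rather than a full kubeadm init, the restart replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd, with the addon phase following once the API server is healthy. The same sequence by hand, assuming the bundled binaries under /var/lib/minikube/binaries/v1.30.3 as shown above:

    KUBEADM=/var/lib/minikube/binaries/v1.30.3/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase certs all --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local --config "$CFG"
    # and, once /healthz returns 200 (see below):
    sudo "$KUBEADM" init phase addon all --config "$CFG"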
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
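The 403 → 500 → 200 progression above is the normal restart sequence for /healthz: anonymous access is forbidden until the RBAC bootstrap roles exist, the verbose 500 bodies then show exactly which post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and finally the endpoint returns a bare "ok". The same probe can be reproduced with curl against the address from the log (-k skips verification of the cluster's self-signed serving cert):

    curl -sk "https://192.168.61.107:8443/healthz?verbose"   # per-check [+]/[-] breakdown, same format as the 500 bodies above
    curl -sk "https://192.168.61.107:8443/healthz"           # plain "ok" once every check passes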
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
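	[editor's note] The polling above checks each system-critical pod one by one against the node's Ready condition. A roughly equivalent manual check from the host is sketched below; it assumes the kubeconfig context carries the profile name "embed-certs-889918" (the default minikube behaviour), which is not shown explicitly in this log.
	    # Sketch only: mirror the readiness polling above by hand (context name is an assumption).
	    kubectl --context embed-certs-889918 get node embed-certs-889918 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
	    kubectl --context embed-certs-889918 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m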
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
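	[editor's note] After the manifests above are applied, the addon is treated as healthy once the metrics-server Deployment rolls out and the aggregated metrics API starts answering. A hedged way to confirm that by hand is sketched below; the deployment name is inferred from the pod name in this log, and the rollout/top invocations are assumptions, not part of the test.
	    # Sketch only: confirm metrics-server after the apply shown above.
	    kubectl --context embed-certs-889918 -n kube-system rollout status deployment/metrics-server --timeout=2m
	    kubectl --context embed-certs-889918 top nodes   # answers only once v1beta1.metrics.k8s.io is being served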
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
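	[editor's note] The enablement state summarised above can also be inspected afterwards with the addons subcommand (profile name taken from this log):
	    minikube addons list -p embed-certs-889918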
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
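	[editor's note] The two SSH commands above set the guest hostname and rewrite the 127.0.1.1 entry in /etc/hosts. A quick manual verification is sketched below; the grep is an assumption, the profile name comes from this log.
	    # Sketch only: verify the hostname and /etc/hosts entry written by the commands above.
	    minikube ssh -p old-k8s-version-570369 "hostname; grep old-k8s-version-570369 /etc/hosts"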
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
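	[editor's note] configureAuth above generates a server certificate whose SANs include the machine IP and hostnames listed earlier, then copies it to /etc/docker/server.pem on the guest. Inspecting those SANs by hand could look like the sketch below (the openssl invocation is an assumption; the path comes from this log).
	    # Sketch only: inspect the SANs of the server certificate copied to the guest above.
	    minikube ssh -p old-k8s-version-570369 \
	      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"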
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
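	[editor's note] The delta logged above is simply guest timestamp minus host timestamp: 1721417695.444458011 s − 1721417695.351498388 s ≈ 0.092959623 s, i.e. the 92.959623ms reported, which is within minikube's clock-skew tolerance so no clock adjustment is made.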
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
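	The two entries above show the start code waiting up to 60s for /var/run/crio/crio.sock to reappear after restarting CRI-O, by re-running `stat` until it succeeds. A minimal standalone sketch of that poll-until-the-path-exists pattern; `waitForPath` and the 200ms interval are illustrative assumptions, not minikube's actual helper (which runs the check over SSH):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls os.Stat until the path exists or the timeout elapses.
// Hypothetical helper; minikube performs the equivalent check over SSH.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 200*time.Millisecond); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}
```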
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
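	The bash one-liner above rewrites /etc/hosts so that exactly one `host.minikube.internal` entry points at the host gateway (192.168.39.1). Purely to spell out what that pipeline does, here is a hedged Go sketch of the same idempotent update; `ensureHostsEntry` is an invented name and this is not how minikube applies the change (it uses the shell command shown in the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing lines for host and appends a single
// "ip<TAB>host" entry, mirroring the grep/echo/cp one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop stale entries for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```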
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
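	The 403/500/200 sequence above is the usual apiserver warm-up: /healthz keeps failing while post-start hooks such as rbac/bootstrap-roles finish, and the caller simply re-polls every ~500ms until it gets 200 OK or hits its deadline. A self-contained sketch of that polling loop, assuming an anonymous HTTPS client that skips certificate verification (the helper name and intervals are illustrative, not minikube's code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200 OK
// or the timeout elapses. 403 and 500 responses (as seen in the log while the
// RBAC bootstrap hooks finish) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// No CA bundle for the apiserver's self-signed cert in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.104:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```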
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
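	These DBG lines show libmachine waiting for the restarted no-preload-971041 domain to pick up a DHCP lease, re-querying with a delay that grows (with jitter) from roughly 200ms toward multi-second waits. A minimal sketch of that retry-with-growing-delay loop; `lookupIP` is a hypothetical stand-in for the real libvirt lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt network's DHCP leases
// for the domain's MAC address; it is not minikube's real implementation.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with a delay that grows (with jitter) each
// attempt, mirroring the "will retry after ..." entries in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay each round
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:72:b8:bd", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```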
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
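	The series of `openssl x509 -noout -in ... -checkend 86400` runs above asks whether each control-plane certificate will still be valid 24 hours from now. The equivalent check in Go, as a hedged sketch using crypto/x509 (the path in main is just one of the certificates from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, i.e. the Go analogue of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```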
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
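Each of the four kubeconfig-style files is probed with grep for the expected control-plane endpoint; because grep exits with status 2 here (the files do not exist yet), minikube treats them as stale and removes them before regenerating. The per-file pattern amounts to roughly this sketch, with the endpoint and file list taken from the log lines above:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already references the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done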
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
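Rather than a full kubeadm init, the restart path rebuilds the control plane phase by phase; the five invocations above are equivalent to running, as root and with the pinned binaries on PATH:

	export PATH="/var/lib/minikube/binaries/v1.20.0:$PATH"
	kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml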
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
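The repeated pgrep probes, retried roughly every 500ms, implement the wait for the apiserver process to appear: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. A minimal version of that polling loop:

	# poll until a kube-apiserver process started for this cluster shows up
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done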
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
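provision.go then signs a fresh server certificate with the minikube CA, embedding the SANs listed above (127.0.0.1, the VM IP, localhost, minikube and the machine name). Purely as an illustrative sketch, a roughly equivalent openssl flow would be (the real implementation is in Go, and the file names are shortened here):

	# assumes server-key.pem already exists alongside the CA material
	openssl req -new -key server-key.pem \
	  -subj "/O=jenkins.no-preload-971041" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.119,DNS:localhost,DNS:minikube,DNS:no-preload-971041')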
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
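The guest clock is read over SSH with what the log renders as date +%!s(MISSING).%!N(MISSING), which appears to be Go's printf mangling the literal date +%s.%N command, and is compared with the host time recorded a moment earlier; the 61ms delta is inside the tolerance, so no clock adjustment is pushed to the VM. A hand-rolled equivalent of the comparison (user and IP illustrative):

	guest=$(ssh docker@192.168.50.119 'date +%s.%N')
	host=$(date +%s.%N)
	echo "delta: $(echo "$host - $guest" | bc) s"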
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
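The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Reconstructed from those expressions (not copied from the VM), the relevant fragment of the drop-in ends up looking roughly like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]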
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
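The failed sysctl probe is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding before restarting the runtime. As standalone commands:

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves once the module is loaded
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'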
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
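
	The pod_ready lines above (and below) are minikube repeatedly polling each kube-system pod until its Ready condition flips to True, giving up after the stated 4m0s. A hedged client-go sketch of that wait loop follows; it is not minikube's own helper, and the namespace, pod name, and kubeconfig path are only illustrative values taken from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors "waiting up to 4m0s"
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-wf82d", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("pod never became Ready within the timeout")
	}
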
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
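
	Everything from "LoadCachedImages start" to this point is the no-preload image path: for each image minikube stats the tarball under /var/lib/minikube/images, skips the copy when it already exists, removes any mismatched image with crictl rmi, and loads the tarball into CRI-O with podman load. A minimal local sketch of that load step follows; it shells out directly instead of going through minikube's ssh_runner, so treat it as an illustration only.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadCachedImage mimics the logged sequence: confirm the cached tarball is
	// present on the node, then import it into the CRI-O image store via podman.
	func loadCachedImage(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("image tarball missing: %w", err)
		}
		// Equivalent of the logged "sudo podman load -i <tarball>" step.
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0"); err != nil {
			fmt.Println(err)
		}
	}
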
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
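
	The openssl x509 -checkend 86400 runs above simply verify that each control-plane certificate is still valid for at least the next 24 hours before the cluster is restarted. The Go sketch below performs the equivalent check with crypto/x509; the certificate path is one of the files the log inspects and the 24-hour window mirrors the 86400-second argument.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path stays valid for
	// at least the given window, i.e. what "openssl x509 -checkend" tests.
	func certValidFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// -checkend 86400 succeeds only if NotAfter is later than now+86400s.
		return cert.NotAfter.After(time.Now().Add(window)), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("valid for next 24h:", ok)
	}
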
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
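
	The healthz sequence above is the usual apiserver bring-up pattern: the endpoint first refuses the connection, then answers 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200 "ok". A minimal sketch of such a polling loop, assuming a self-signed apiserver certificate and the address from the log, looks like this (it is not minikube's api_server.go code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
	// tolerating connection errors, 403 and 500 responses along the way.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver serves a self-signed certificate during bring-up, so the
		// probe skips TLS verification, like an anonymous curl -k check would.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint answered "ok"
				}
				// 403 (anonymous user) and 500 (hooks still failing) mean "not yet".
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
		}
		return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.119:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
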
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
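[annotation] The pod_ready.go entries above walk each system-critical pod in turn (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server) and wait for its Ready condition to report True. A rough stand-alone equivalent of one such wait using kubectl's JSONPath output — the poll interval and timeout here are assumptions, and the pod name is just one taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPodReady polls until the pod's Ready condition is "True" or the timeout expires.
    func waitPodReady(namespace, pod string, timeout time.Duration) error {
        jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod, "-o", jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
        if err := waitPodReady("kube-system", "metrics-server-78fcd8795b-j78vm", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }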
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
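[annotation] Each diagnostic pass above follows the same pattern: list CRI containers for every control-plane component with `crictl ps -a --quiet --name=<component>` (all empty here, and `describe nodes` fails because localhost:8443 refuses connections), then gather kubelet, dmesg, CRI-O and container-status output. A condensed sketch of that collection step, assuming local shell access to the node instead of the test's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same diagnostics the driver gathers above once no control-plane containers are found.
        cmds := [][]string{
            {"sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"},
            {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
            {"sudo", "journalctl", "-u", "crio", "-n", "400"},
            {"sudo", "crictl", "ps", "-a"},
        }
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            fmt.Printf("$ %s\n(err=%v)\n%s\n", strings.Join(c, " "), err, out)
        }
    }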
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
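	The pass above is one iteration of minikube's log-gathering loop for this node: it probes the CRI runtime for each expected control-plane container by name, finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal shell sketch of the same probe, assuming shell access on the node and that crictl is on the PATH (both assumptions, not part of the report):

	    # Probe the CRI for each control-plane component, mirroring the crictl
	    # invocations recorded in the log above; empty output for a component
	    # corresponds to the "No container was found matching ..." warnings.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done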
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
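	Every "describe nodes" attempt in these passes fails identically: kubectl is refused at localhost:8443, which is consistent with the probes above never finding a kube-apiserver container. A quick diagnostic sketch for confirming the apiserver endpoint is down (the ss/curl commands and the /healthz path are illustrative assumptions; the test itself only runs kubectl):

	    # Is anything listening on the apiserver port, and does it answer a
	    # health probe? "connection refused" here matches the kubectl error above.
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on :8443"
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"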
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
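	The interleaved pod_ready lines come from three other test processes (68019, 68536 and 68617), each polling a metrics-server pod in kube-system on its own cluster and repeatedly seeing Ready "False". A one-off equivalent of that check, using a pod name taken from the log (the context placeholder and jsonpath expression are illustrative assumptions, not what the test runs):

	    # Print the Ready condition of the metrics-server pod being polled above.
	    kubectl --context <cluster-context> -n kube-system get pod \
	      metrics-server-569cc877fc-sfc85 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'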
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
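The pass that starts at 19:38:28.309615 and ends here is one complete iteration of the loop process 68995 repeats for the rest of this section: pgrep for a kube-apiserver process, probe each expected control-plane container with crictl (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), find none, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The loop below is a hedged sketch of running the same crictl probes by hand; "$PROFILE" is a placeholder that does not appear in this excerpt.

    # Hedged sketch of the crictl probes in the pass above, issued manually over minikube ssh.
    # "$PROFILE" is a placeholder profile name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== ${name} =="
      minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="${name}"
    done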
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
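Because no containers are ever found, the only useful output each pass can collect comes from host-level logs: the last 400 lines of the kubelet and crio journald units, a filtered dmesg, and the crictl/docker container listing. A hedged sketch of pulling the same logs by hand follows; "$PROFILE" is again a placeholder.

    # Hedged sketch of the log-gathering commands seen in each pass; "$PROFILE" is a placeholder.
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
    minikube -p "$PROFILE" ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"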
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
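For run 68617 the extra wait on system-critical pods is bounded at four minutes per pod: metrics-server-569cc877fc-sfc85 never reports Ready, so the wait ends with "context deadline exceeded" after 4m0.005795927s and the run moves on to waiting for the apiserver process. The sketch below is a rough kubectl equivalent of that bounded wait; "$CONTEXT" is a placeholder and the k8s-app=metrics-server selector is an assumption, not read from this log.

    # Hedged sketch of a bounded readiness wait like the one that just timed out.
    # "$CONTEXT" is a placeholder; the k8s-app=metrics-server selector is an assumption.
    kubectl --context "$CONTEXT" -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=metrics-server --timeout=4m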
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
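The grep/rm sequence above amounts to removing any kubeconfig that no longer references the expected control-plane endpoint. A rough shell equivalent, reconstructed only from the commands shown in the log (not from minikube source; endpoint and paths are taken from the lines above):

	# Drop stale kubeconfigs that do not reference the current control-plane endpoint.
	endpoint="https://control-plane.minikube.internal:8444"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done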
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
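The 1-k8s.conflist copied here is a bridge CNI configuration; its contents are not printed in the log. A generic bridge + portmap conflist of roughly this size looks like the sketch below (illustrative only; the pod subnet and plugin options are assumptions, not the file minikube actually wrote):

	# Illustrative bridge CNI config; subnet and options are assumed, not taken from the log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF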
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
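The elevateKubeSystemPrivileges step timed here is the clusterrolebinding creation plus the repeated "get sa default" polling shown above. As a standalone sketch assembled from those logged commands (the 0.5s poll interval is an assumption):

	KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
	KCFG=--kubeconfig=/var/lib/minikube/kubeconfig
	# Grant kube-system:default cluster-admin so addon pods can manage cluster resources.
	sudo "$KUBECTL" "$KCFG" create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Wait until the controller manager has created the "default" ServiceAccount.
	until sudo "$KUBECTL" "$KCFG" get sa default >/dev/null 2>&1; do sleep 0.5; done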
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
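The healthz probe above can be reproduced by hand against the same endpoint; in a default configuration the API server permits anonymous access to /healthz (via the system:public-info-viewer binding), so a plain curl with -k (the serving certificate is not in the host trust store) should return the body "ok":

	# Expect HTTP 200 with body "ok" once the control plane is healthy.
	curl -ks https://192.168.72.104:8444/healthz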
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
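The lines above show the stale-config check that precedes kubeadm init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or does not reference it is removed so kubeadm regenerates it. A minimal Go sketch of that pattern follows; the file paths and endpoint string are taken from the log, while the helper itself (names, output) is illustrative rather than minikube's actual code.

// stale_config_sketch.go - illustrative only; the real logic is what the
// kubeadm.go:155/163 log lines above report. Paths and endpoint mirror the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm init
			// writes a fresh kubeconfig instead of reusing a stale one.
			_ = os.Remove(path)
			fmt.Printf("removed stale config %s\n", path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}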
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
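The "Configuring bridge CNI" step above amounts to writing a CNI conflist to /etc/cni/net.d/1-k8s.conflist; the log records the copy (496 bytes) but not the file contents. The snippet below embeds an illustrative bridge conflist as a Go constant so the shape of such a file is visible; the cniVersion, bridge name, subnet and plugin options are assumptions for the example and may differ from the file minikube actually generated.

// bridge_cni_sketch.go - an illustrative bridge CNI config of the kind the
// step above writes to /etc/cni/net.d/1-k8s.conflist. Field values are
// assumptions for the example, not the exact 496-byte file from the log.
package main

import "fmt"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Printing stands in for the scp step the log shows; a real writer would
	// place this content at /etc/cni/net.d/1-k8s.conflist on the node.
	fmt.Println(bridgeConflist)
}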
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
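The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True (or the 6m0s budget runs out). A hedged client-go sketch of that kind of check is shown below; the kubeconfig path, pod name and timeout are taken or inferred from the log, the polling interval is assumed, and the code is illustrative rather than minikube's own wait loop.

// pod_ready_sketch.go - a minimal client-go version of the readiness check
// reported by pod_ready.go above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-7kdqd", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}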
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
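The api_server.go lines above probe https://192.168.50.119:8443/healthz and accept an HTTP 200 with body "ok". A small Go sketch of the same probe follows; it skips TLS verification to stay short, which is a simplification for the example rather than what the real check does.

// healthz_sketch.go - an illustrative version of the apiserver healthz probe
// logged above (api_server.go:253/279). InsecureSkipVerify is only for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.119:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above shows the expected result: HTTP 200 with body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}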
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.545068640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418494545043806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b9bd4b2-ec7c-429b-8546-adf5451ff22a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.545883546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3705f87-0a26-4c16-994f-c16f0af75f9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.545986885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3705f87-0a26-4c16-994f-c16f0af75f9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.546177313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3705f87-0a26-4c16-994f-c16f0af75f9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.581595584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11e40ca8-c135-4a57-8306-db99fcb75f3b name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.581702003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11e40ca8-c135-4a57-8306-db99fcb75f3b name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.582883512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4fb67cb-a604-4c2c-96d2-0b7b1cdd54eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.583357400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418494583331109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4fb67cb-a604-4c2c-96d2-0b7b1cdd54eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.583778650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04d4ab07-a6be-418f-ba26-a164b6db80fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.583831822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04d4ab07-a6be-418f-ba26-a164b6db80fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.584067321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04d4ab07-a6be-418f-ba26-a164b6db80fa name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.624867775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1aaa8c9-e623-4267-80ee-7eb75ed14182 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.625156839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1aaa8c9-e623-4267-80ee-7eb75ed14182 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.626520946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5080282f-4b96-4ee7-aa29-64be1e11f62c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.627042891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418494626902071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5080282f-4b96-4ee7-aa29-64be1e11f62c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.627651220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72791267-9274-48ae-be05-282084db2360 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.627723177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72791267-9274-48ae-be05-282084db2360 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.627985179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72791267-9274-48ae-be05-282084db2360 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.659406940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43c8318b-4534-40c8-89a5-d3def6bdf1fc name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.659477363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43c8318b-4534-40c8-89a5-d3def6bdf1fc name=/runtime.v1.RuntimeService/Version
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.660426565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cdfa720-3fee-4699-85c0-590b7a41f124 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.660851045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418494660828294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cdfa720-3fee-4699-85c0-590b7a41f124 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.661585240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a61a0e70-cf07-48e2-8cb7-0e32c99eda71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.661651822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a61a0e70-cf07-48e2-8cb7-0e32c99eda71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:48:14 embed-certs-889918 crio[720]: time="2024-07-19 19:48:14.661886644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a61a0e70-cf07-48e2-8cb7-0e32c99eda71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aec5bd5dcc9a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   f3324641df819       storage-provisioner
	560ffa4638473       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   4f9dbafba502e       kube-scheduler-embed-certs-889918
	dc7a3241c8651       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   f9fcc8073d659       busybox
	9c90b21a180b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   d8745d8bc0585       coredns-7db6d8ff4d-7h48w
	4b54e8db4eb45       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   dad11fc8c2117       kube-proxy-5fcgs
	f0710a5e9dce0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   f3324641df819       storage-provisioner
	46b13ce63930e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   380d6b1030065       etcd-embed-certs-889918
	82586fde0cf25       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   0bbaa257463ac       kube-apiserver-embed-certs-889918
	4e88e6527a2ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   ae68220a155c4       kube-controller-manager-embed-certs-889918
	
	
	==> coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55154 - 26044 "HINFO IN 4126011779228178089.2074560202026123776. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008880797s
	
	
	==> describe nodes <==
	Name:               embed-certs-889918
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-889918
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=embed-certs-889918
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_26_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-889918
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:48:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:45:28 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:45:28 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:45:28 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:45:28 +0000   Fri, 19 Jul 2024 19:34:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.107
	  Hostname:    embed-certs-889918
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a76992b08f504b5080d8420bec37f7ca
	  System UUID:                a76992b0-8f50-4b50-80d8-420bec37f7ca
	  Boot ID:                    28064893-1d02-4b22-ba14-280f6e8baa55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-7h48w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-889918                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-889918             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-889918    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-5fcgs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-889918             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-sfc85               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-889918 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-889918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-889918 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node embed-certs-889918 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-889918 event: Registered Node embed-certs-889918 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-889918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-889918 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-889918 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-889918 event: Registered Node embed-certs-889918 in Controller
	
	
	==> dmesg <==
	[Jul19 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051032] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.554846] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.805303] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.361469] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.058177] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057030] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.193187] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.132131] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.301687] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.273336] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.061117] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.765459] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.532820] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.818101] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +3.945177] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.729333] kauditd_printk_skb: 43 callbacks suppressed
	[Jul19 19:35] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] <==
	{"level":"warn","ts":"2024-07-19T19:35:02.641974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"424.504931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-07-19T19:35:02.642009Z","caller":"traceutil/trace.go:171","msg":"trace[1778350074] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-sfc85; range_end:; response_count:1; response_revision:583; }","duration":"424.620554ms","start":"2024-07-19T19:35:02.217377Z","end":"2024-07-19T19:35:02.641998Z","steps":["trace[1778350074] 'range keys from in-memory index tree'  (duration: 424.364929ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:35:02.642047Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:02.217364Z","time spent":"424.675909ms","remote":"127.0.0.1:41346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4305,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" "}
	{"level":"warn","ts":"2024-07-19T19:35:03.765317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"646.955764ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17565323654602007970 > lease_revoke:<id:73c490cc7d51a424>","response":"size:27"}
	{"level":"info","ts":"2024-07-19T19:35:03.76618Z","caller":"traceutil/trace.go:171","msg":"trace[2065667287] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:628; }","duration":"548.440105ms","start":"2024-07-19T19:35:03.217721Z","end":"2024-07-19T19:35:03.766161Z","steps":["trace[2065667287] 'read index received'  (duration: 163.611817ms)","trace[2065667287] 'applied index is now lower than readState.Index'  (duration: 384.827065ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:35:03.766356Z","caller":"traceutil/trace.go:171","msg":"trace[935772066] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"644.866409ms","start":"2024-07-19T19:35:03.121481Z","end":"2024-07-19T19:35:03.766347Z","steps":["trace[935772066] 'process raft request'  (duration: 644.077751ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:35:03.767031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.121428Z","time spent":"645.123925ms","remote":"127.0.0.1:41248","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":776,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447d0511b51\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447d0511b51\" value_size:679 lease:8341951617747231911 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T19:35:04.168392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"950.648276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-07-19T19:35:04.168467Z","caller":"traceutil/trace.go:171","msg":"trace[416421072] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-sfc85; range_end:; response_count:1; response_revision:585; }","duration":"950.762684ms","start":"2024-07-19T19:35:03.217689Z","end":"2024-07-19T19:35:04.168451Z","steps":["trace[416421072] 'agreement among raft nodes before linearized reading'  (duration: 549.6199ms)","trace[416421072] 'range keys from in-memory index tree'  (duration: 400.973836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.168503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.217673Z","time spent":"950.821676ms","remote":"127.0.0.1:41346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4305,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" "}
	{"level":"warn","ts":"2024-07-19T19:35:04.168747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"721.180245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T19:35:04.168792Z","caller":"traceutil/trace.go:171","msg":"trace[251332242] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"721.246369ms","start":"2024-07-19T19:35:03.447534Z","end":"2024-07-19T19:35:04.16878Z","steps":["trace[251332242] 'agreement among raft nodes before linearized reading'  (duration: 319.793515ms)","trace[251332242] 'range keys from in-memory index tree'  (duration: 401.397634ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.168831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.447493Z","time spent":"721.332384ms","remote":"127.0.0.1:41156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T19:35:04.169356Z","caller":"traceutil/trace.go:171","msg":"trace[594046863] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"398.578882ms","start":"2024-07-19T19:35:03.770764Z","end":"2024-07-19T19:35:04.169343Z","steps":["trace[594046863] 'process raft request'  (duration: 301.447532ms)","trace[594046863] 'compare'  (duration: 95.973061ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.169464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.770747Z","time spent":"398.678502ms","remote":"127.0.0.1:41248","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":776,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447ef61a091\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447ef61a091\" value_size:679 lease:8341951617747231911 >> failure:<>"}
	{"level":"info","ts":"2024-07-19T19:35:04.477831Z","caller":"traceutil/trace.go:171","msg":"trace[795511529] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"298.444962ms","start":"2024-07-19T19:35:04.179366Z","end":"2024-07-19T19:35:04.477811Z","steps":["trace[795511529] 'read index received'  (duration: 265.220724ms)","trace[795511529] 'applied index is now lower than readState.Index'  (duration: 33.223679ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:35:04.478141Z","caller":"traceutil/trace.go:171","msg":"trace[1692419606] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"299.479538ms","start":"2024-07-19T19:35:04.178648Z","end":"2024-07-19T19:35:04.478127Z","steps":["trace[1692419606] 'process raft request'  (duration: 265.98541ms)","trace[1692419606] 'compare'  (duration: 33.094369ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.47841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.028658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-889918\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-07-19T19:35:04.47966Z","caller":"traceutil/trace.go:171","msg":"trace[1396925025] range","detail":"{range_begin:/registry/minions/embed-certs-889918; range_end:; response_count:1; response_revision:587; }","duration":"300.305148ms","start":"2024-07-19T19:35:04.179342Z","end":"2024-07-19T19:35:04.479647Z","steps":["trace[1396925025] 'agreement among raft nodes before linearized reading'  (duration: 298.978802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:35:04.480176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:04.17933Z","time spent":"300.826906ms","remote":"127.0.0.1:41344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5508,"request content":"key:\"/registry/minions/embed-certs-889918\" "}
	{"level":"warn","ts":"2024-07-19T19:35:04.478494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.307987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2024-07-19T19:35:04.482078Z","caller":"traceutil/trace.go:171","msg":"trace[348034883] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:587; }","duration":"129.917645ms","start":"2024-07-19T19:35:04.352146Z","end":"2024-07-19T19:35:04.482063Z","steps":["trace[348034883] 'agreement among raft nodes before linearized reading'  (duration: 126.305142ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:44:45.113531Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":833}
	{"level":"info","ts":"2024-07-19T19:44:45.12317Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":833,"took":"9.351791ms","hash":762878381,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2625536,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-19T19:44:45.123223Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":762878381,"revision":833,"compact-revision":-1}
	
	
	==> kernel <==
	 19:48:14 up 13 min,  0 users,  load average: 0.01, 0.11, 0.09
	Linux embed-certs-889918 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] <==
	I0719 19:42:47.322668       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:44:46.324890       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:44:46.325050       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 19:44:47.325813       1 handler_proxy.go:93] no RequestInfo found in the context
	W0719 19:44:47.325820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:44:47.326175       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:44:47.326247       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0719 19:44:47.326304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:44:47.327526       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:45:47.326975       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:45:47.327106       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:45:47.327113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:45:47.328110       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:45:47.328197       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:45:47.328226       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:47:47.328001       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:47:47.328121       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:47:47.328129       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:47:47.329039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:47:47.329148       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:47:47.329178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] <==
	I0719 19:42:29.385659       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:42:58.900264       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:42:59.393173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:43:28.905889       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:43:29.401334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:43:58.910852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:43:59.410003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:44:28.916157       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:44:29.418188       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:44:58.921503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:44:59.426048       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:45:28.926325       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:45:29.433708       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:45:55.970161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="289.794µs"
	E0719 19:45:58.931980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:45:59.440745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:46:08.969588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="910.692µs"
	E0719 19:46:28.937261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:46:29.450118       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:46:58.942492       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:46:59.457178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:28.947103       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:47:29.466554       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:58.952375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:47:59.474362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] <==
	I0719 19:34:47.537614       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:34:47.554334       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.107"]
	I0719 19:34:47.617146       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:34:47.617197       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:34:47.617215       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:34:47.619304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:34:47.619487       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:34:47.619512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:34:47.621884       1 config.go:192] "Starting service config controller"
	I0719 19:34:47.621953       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:34:47.621985       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:34:47.622005       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:34:47.623158       1 config.go:319] "Starting node config controller"
	I0719 19:34:47.623182       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:34:47.722866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:34:47.722983       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:34:47.723214       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] <==
	I0719 19:35:03.560743       1 serving.go:380] Generated self-signed cert in-memory
	I0719 19:35:04.507022       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 19:35:04.507065       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:35:04.511499       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:35:04.511713       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:35:04.512032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:35:04.511651       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0719 19:35:04.512260       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0719 19:35:04.511723       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:35:04.511789       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0719 19:35:04.513644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 19:35:04.612148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:35:04.612398       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0719 19:35:04.614516       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:45:43 embed-certs-889918 kubelet[932]: E0719 19:45:43.973014     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 19:45:43 embed-certs-889918 kubelet[932]: E0719 19:45:43.973235     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6s99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-sfc85_kube-system(fc2a1efe-1728-42d9-bd3d-9eb29af783e9): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 19 19:45:43 embed-certs-889918 kubelet[932]: E0719 19:45:43.973269     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:45:55 embed-certs-889918 kubelet[932]: E0719 19:45:55.955595     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:46:08 embed-certs-889918 kubelet[932]: E0719 19:46:08.955126     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:46:23 embed-certs-889918 kubelet[932]: E0719 19:46:23.954380     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:46:35 embed-certs-889918 kubelet[932]: E0719 19:46:35.955750     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:46:41 embed-certs-889918 kubelet[932]: E0719 19:46:41.968169     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:46:41 embed-certs-889918 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:46:41 embed-certs-889918 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:46:41 embed-certs-889918 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:46:41 embed-certs-889918 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:46:48 embed-certs-889918 kubelet[932]: E0719 19:46:48.954668     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:47:02 embed-certs-889918 kubelet[932]: E0719 19:47:02.954630     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:47:13 embed-certs-889918 kubelet[932]: E0719 19:47:13.954402     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:47:27 embed-certs-889918 kubelet[932]: E0719 19:47:27.954809     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:47:38 embed-certs-889918 kubelet[932]: E0719 19:47:38.954134     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:47:41 embed-certs-889918 kubelet[932]: E0719 19:47:41.971035     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:47:41 embed-certs-889918 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:47:41 embed-certs-889918 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:47:41 embed-certs-889918 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:47:41 embed-certs-889918 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:47:49 embed-certs-889918 kubelet[932]: E0719 19:47:49.954901     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:48:00 embed-certs-889918 kubelet[932]: E0719 19:48:00.954802     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:48:11 embed-certs-889918 kubelet[932]: E0719 19:48:11.956649     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	
	
	==> storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] <==
	I0719 19:35:18.240073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:35:18.250264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:35:18.250322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:35:35.652631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:35:35.653971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a!
	I0719 19:35:35.654966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5f04683-9026-4a64-8f24-aab1220d4b9e", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a became leader
	I0719 19:35:35.754708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a!
	
	
	==> storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] <==
	I0719 19:34:47.482338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 19:35:17.489129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
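The two storage-provisioner logs above show a normal single-replica hand-off: the earlier container (f0710a5e) exited fatally after 30s when it could not reach the apiserver at 10.96.0.1:443, and its replacement (aec5bd5d) acquired the kube-system/k8s.io-minikube-hostpath lock about 17 seconds after starting. As a sketch only (not something the harness runs), the current holder of that Endpoints-based lock can be checked against the same context; the exact leader annotation key is assumed to be the standard client-go one:

	# Sketch: inspect the provisioner's leader-election record on the Endpoints object named in the log
	kubectl --context embed-certs-889918 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml | grep -i leader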
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-889918 -n embed-certs-889918
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-889918 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-sfc85
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85: exit status 1 (65.322677ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-sfc85" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.10s)
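Two recurring errors dominate the kubelet log in the dump above, and neither is what failed this test. The metrics-server ImagePullBackOff is expected noise: the suite points the addon at the unresolvable fake.domain registry (the addons enable metrics-server --registries=MetricsServer=fake.domain entries visible in the Audit table of the next log dump), so the pull can never succeed. The ip6tables canary error is environmental: the guest kernel has no ip6tables nat table. A hedged sketch for confirming both by hand; the context and profile names come from the log, and k8s-app=metrics-server is assumed to be the addon's label selector:

	# Confirm the metrics-server pod is stuck on the fake.domain image
	kubectl --context embed-certs-889918 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-889918 -n kube-system describe pod -l k8s-app=metrics-server
	# Confirm the missing ip6 nat table inside the guest (expect the same "Table does not exist" error)
	out/minikube-linux-amd64 -p embed-certs-889918 ssh "sudo ip6tables -t nat -L -n"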

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:49:17.25348603 +0000 UTC m=+5731.517249013
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
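The wait that timed out is a 9m label-selector poll for dashboard pods in the kubernetes-dashboard namespace of this profile. A rough manual equivalent, as a sketch only (kubectl wait exits immediately when nothing matches the selector yet, unlike the harness's poll loop):

	# Sketch: wait for the dashboard pods the test expects after the restart
	kubectl --context default-k8s-diff-port-578533 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m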
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-578533 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-578533 logs -n 25: (2.051110507s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
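The Audit table is minikube's own record of recent commands per profile. The stop / enable-dashboard / restart sequence this test exercises for default-k8s-diff-port-578533 can be replayed verbatim from the rows above (a sketch; binary path and flags are exactly as logged, timings will differ on other hardware):

	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-578533 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-578533 --alsologtostderr -v=3
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-578533 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p default-k8s-diff-port-578533 --memory=2200 --alsologtostderr --wait=true \
	  --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3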
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
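This is where the first provisioning attempt for no-preload-971041 is abandoned: after roughly 4m37s of "dial tcp 192.168.50.119:22: connect: no route to host" (the long retry run above), StartHost fails with "provision: host is not running", the machines lock is released, and a fresh attempt is scheduled 5 seconds later. When a kvm2 profile stalls like this, the domain state and address can be checked from the host side; a sketch, assuming the libvirt domain carries the profile name as it does elsewhere in this log:

	# Is the VM running, and does libvirt see an address on its interface?
	virsh -c qemu:///system list --all
	virsh -c qemu:///system domifaddr no-preload-971041
	out/minikube-linux-amd64 status -p no-preload-971041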
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
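The IP-wait loop above ends when libvirt's DHCP hands the restarted domain its previous address (192.168.72.104 on virbr4), so minikube keeps the existing lease instead of adding a static host entry. The same lease can be read straight from the host; a sketch using the network name from the log:

	virsh -c qemu:///system net-dhcp-leases mk-default-k8s-diff-port-578533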
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
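	The WaitForSSH probe above shells out to an external ssh binary with the option list logged at 19:34:12.837916 and runs "exit 0" until it succeeds. The following is a minimal sketch, assuming that option list, of assembling such an invocation with os/exec; the key path in main is illustrative, not taken from the run.
	// Sketch only: run a command on the guest via an external ssh binary.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func runSSH(addr, keyPath, command string) (string, error) {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			command,
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := runSSH("192.168.72.104", "/home/jenkins/.minikube/id_rsa", "exit 0")
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}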
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
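	The clock check above compares the guest's "date +%s.%N" output against the host clock and proceeds only when the delta is within tolerance. A minimal sketch of that comparison follows; the 2-second tolerance shown is an assumption for illustration, not a value taken from the log.
	// Sketch only: parse the guest clock output and compute the delta.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func clockDelta(guestOutput string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		delta, err := clockDelta("1721417653.961477719")
		fmt.Printf("guest clock delta %v (within 2s tolerance: %v), err=%v\n",
			delta, delta <= 2*time.Second, err)
	}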
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
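	The sequence above is a fallback: when the bridge-nf-call-iptables sysctl is absent, the br_netfilter module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A minimal sketch of that fallback follows; error handling is simplified and running it requires root.
	// Sketch only: ensure the bridge netfilter sysctl exists and enable forwarding.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func ensureNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// The sysctl is not there yet; loading the module creates it.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}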
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
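	The preload step above copies the cri-o image tarball to the guest, extracts it into /var with xattrs preserved, and then removes it. Below is a minimal sketch of the extract-and-clean-up part; the scp step is elided and only the tar flags mirror the logged command.
	// Sketch only: extract the preloaded image tarball, then remove it.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball missing: %w", err)
		}
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		return os.Remove(tarball)
	}

	func main() {
		fmt.Println(extractPreload("/preloaded.tar.lz4"))
	}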
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
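
crictl initially reported no preloaded images, so the cached-image tarball was copied to the guest, unpacked into /var with lz4, removed, and crictl re-checked. A local os/exec sketch of the unpack/cleanup/verify steps (an assumption for illustration; minikube drives these commands on the guest through ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preload tarball into /var, deletes it, and
// re-lists images, mirroring the command sequence logged above.
func extractPreload(tarball string) error {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball},
		{"sudo", "rm", "-f", tarball},
		{"sudo", "crictl", "images", "--output", "json"}, // verify images landed
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload extract failed:", err)
	}
}
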
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
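
The kubelet drop-in printed at kubeadm.go:946 is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, systemd is reloaded, and the kubelet is started. A sketch of that step with the drop-in content copied from the log (systemctl is invoked locally here instead of over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Drop-in content as logged by kubeadm.go:946 above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107

[Install]
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Printf("systemctl %v: %v\n%s\n", args, err, out)
		}
	}
}
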
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
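
Each control-plane certificate is probed with openssl x509 -checkend 86400, i.e. "does this certificate expire within 24 hours?". The equivalent check in Go with crypto/x509 (sketch; certificate paths copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}
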
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
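
The grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so the kubeadm init phases below can regenerate it. A compact sketch of that logic (run locally for illustration; the test performs it over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, matching the grep/rm pairs above.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			_ = os.Remove(p) // ignore "not found", as the log does
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
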
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
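
The healthz wait above tolerates two transient failures before the final 200 "ok": a 403 while the RBAC bootstrap roles are still missing (the unauthenticated probe is rejected) and a 500 while post-start hooks are running. A sketch of that polling loop (TLS verification skipped because the probe runs before a trusted client config exists; not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// logging the intermediate 403/500 responses seen above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute))
}
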
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
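
The 496-byte conflist written to /etc/cni/net.d is not reproduced in the log; the sketch below writes a typical bridge-plus-portmap configuration for the 10.244.0.0/16 pod CIDR chosen earlier. Treat the JSON contents as an assumption, not the exact file minikube generates:

package main

import (
	"fmt"
	"os"
)

// Assumed bridge/portmap conflist for the 10.244.0.0/16 pod CIDR; the exact
// file minikube writes to /etc/cni/net.d/1-k8s.conflist is not shown above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
     "ipMasq": true, "hairpinMode": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
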
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
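
The NodePressure verification reads each node's capacity (2 CPUs and 17734596Ki of ephemeral storage here) and checks the pressure conditions. A client-go sketch of reading those fields (kubeconfig path taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printNodeCapacity lists each node's CPU and ephemeral-storage capacity and
// flags pressure conditions, roughly what the node_conditions.go check does.
func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodeMemoryPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  %s is under %s\n", n.Name, c.Type)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19307-14841/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := printNodeCapacity(kubernetes.NewForConfigOrDie(cfg)); err != nil {
		panic(err)
	}
}
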
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
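
The pod_ready.go lines wait up to 4m0s per system-critical pod, bailing out early when the node itself is not Ready or the pod does not exist yet. A client-go sketch of the per-pod wait (the node-not-Ready shortcut is omitted for brevity; kubeconfig path taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod's Ready condition until it is True or the timeout
// expires, the same per-pod wait the pod_ready.go lines above perform.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19307-14841/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-889918", 4*time.Minute))
}
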
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
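
The cache_images sequence above is minikube's cached-image path for the no-preload profile: probe the runtime for each image, `crictl rmi` stale entries, copy the cached tarball, then `sudo podman load -i`. Below is a minimal sketch of the same probe-then-load step, assuming it is run on the node itself (for example via `minikube ssh`) and reusing the exact commands recorded in the log; the image name and tarball path are copied from the entries above.

// loadcached.go: hedged sketch of the probe-then-load step seen in the log.
// Assumes it runs on the minikube node (e.g. via `minikube ssh`) with sudo available.
package main

import (
	"fmt"
	"os/exec"
)

// present mirrors the `sudo podman image inspect --format {{.Id}}` probe from the log:
// a zero exit status means the runtime already has the image.
func present(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

func main() {
	image := "registry.k8s.io/etcd:3.5.14-0"             // one of the images checked above
	tarball := "/var/lib/minikube/images/etcd_3.5.14-0"  // cache tarball minikube copies into place
	if present(image) {
		fmt.Println("image already loaded, nothing to do")
		return
	}
	// Same load command the log records for each missing image.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	fmt.Printf("podman load: err=%v\n%s", err, out)
}
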
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
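
The rendered kubeadm configuration shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2168 bytes): a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch for listing those documents on the node follows; it assumes the gopkg.in/yaml.v3 module is available (an added dependency, not part of minikube), and the path is taken from the log line above.

// kubeadmdocs.go: hedged sketch that walks the multi-document kubeadm YAML
// written by minikube and prints each document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
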
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
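
The `openssl x509 -noout -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours. The sketch below is an equivalent check written against the Go standard library only; the certificate path is one of the files probed above, and the exit behaviour mirrors openssl's (non-zero when the cert expires inside the window).

// certcheck.go: hedged sketch of the `-checkend 86400` probe in Go stdlib terms.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // checked in the log above
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// `openssl x509 -checkend 86400` fails when the cert expires within 86400 seconds.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}
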
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
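
The restart path above rebuilds the control plane piecewise with individual `kubeadm init phase` invocations (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. The sketch below re-issues the same phase sequence shown in the log via os/exec; it is only an illustration of that ordering, assuming the kubeadm binary and config paths recorded in the lines above and root access on the node.

// phases.go: hedged sketch re-running the `kubeadm init phase` sequence from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm" // binary path from the log
	config := "/var/tmp/minikube/kubeadm.yaml"                     // config path from the log
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s", p, err, out)
		if err != nil {
			break // stop at the first failing phase
		}
	}
}
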
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
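
The healthz polling above traces the usual apiserver startup shape: the 403 responses appear while anonymous access to /healthz is not yet authorized, the 500s while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still completing, and finally a plain 200 "ok". The sketch below reproduces that poll loop against the endpoint from the log; it assumes anonymous access and skips TLS verification, as the unauthenticated probe in the log effectively does.

// healthz.go: hedged sketch of the apiserver health poll minikube performs above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.119:8443/healthz" // endpoint from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // same success condition as the 200 "ok" line above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
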
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
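The "failed describe nodes" warning above comes from shelling out to the kubectl binary bundled on the node while the apiserver is still down, so the command exits with status 1 and "connection refused". A minimal sketch of running that same command and surfacing its exit status (the kubectl and kubeconfig paths are copied from the log; this is an illustration, not minikube's logs.go implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the log above: the versioned kubectl bundled on the node,
        // pointed at the node-local kubeconfig.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // While the apiserver is down this prints the "connection refused" text
            // seen in the warning above, and err carries the non-zero exit status.
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }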
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
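Each "Gathering logs for ..." step above tails the last 400 lines of either a CRI container (via crictl) or a systemd unit (via journalctl). A hedged Go sketch of the same pattern, with the tail count and binary paths mirroring the log and error handling simplified:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs returns the last n lines of a CRI container's logs,
    // mirroring the `crictl logs --tail 400 <id>` calls in the log above.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    // tailUnitLogs returns the last n journal lines for a systemd unit,
    // mirroring the `journalctl -u crio -n 400` / `journalctl -u kubelet -n 400` calls.
    func tailUnitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
        return string(out), err
    }

    func main() {
        if logs, err := tailUnitLogs("kubelet", 400); err == nil {
            fmt.Println(logs)
        }
    }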
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
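The healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, which returns "ok" with status 200 once the control plane is serving. A minimal, hedged Go sketch of such a probe; the address is taken from the log, and skipping TLS verification here is a simplification for the sketch (a real client would trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the same kind of probe as the log above:
    // GET https://<apiserver>/healthz and report the status plus body ("ok" when healthy).
    func checkHealthz(addr string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: verify against the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://" + addr + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://%s/healthz returned %d: %s\n", addr, resp.StatusCode, body)
        return nil
    }

    func main() {
        // Address taken from the log line above; adjust for your cluster.
        _ = checkHealthz("192.168.61.107:8443")
    }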
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
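The system_pods wait that precedes the "Done!" line above lists the kube-system pods and reports each one's phase, which is why metrics-server-569cc877fc-sfc85 shows up as Pending while everything else is Running. For reference, a small client-go sketch of the same kind of listing (this is an illustration, not minikube's system_pods helper; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a local kubeconfig for the cluster; minikube writes one per profile.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Phase is Running/Pending/..., matching the per-pod lines in the log above.
            fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }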
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
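Before re-running kubeadm init, the flow above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not reference it; in this run the files are simply missing, so every grep exits with status 2 and the rm is a no-op. A simplified sketch of that check (the endpoint string mirrors the log; sudo/root handling is omitted, and this is not the kubeadm.go implementation):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any of the standard kubeadm kubeconfig files
    // that do not reference the expected control-plane endpoint, similar to the
    // grep/rm sequence in the log above (which runs via sudo on the guest).
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing somewhere else: remove it so kubeadm regenerates it.
                _ = os.Remove(f)
                fmt.Printf("removed (or absent): %s\n", f)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }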
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
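The metrics-server waits that time out above are polls on the pod's Ready condition that give up after 4m0s. A hedged client-go sketch of the same idea (pod name, namespace, and the 4m0s timeout are taken from the log; the poll interval and kubeconfig path are assumptions, and this is not minikube's pod_ready.go code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // assumption: the real interval may differ
        }
        return fmt.Errorf("timed out waiting %s for pod %q in %q to be Ready", timeout, name, ns)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Pod name and 4m0s timeout mirror the log above.
        if err := waitPodReady(cs, "kube-system", "metrics-server-78fcd8795b-j78vm", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }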
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
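The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. As a sketch, it can be recomputed in Go from the CA certificate; the ca.crt path is assumed from the certificateDir ("/var/lib/minikube/certs") reported earlier in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed location of the cluster CA, based on the certificateDir above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash is sha256 over the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}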
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
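For context, the 1-k8s.conflist written above is a bridge CNI configuration that the kubelet and CRI-O pick up from /etc/cni/net.d. The sketch below shows what such a conflist typically contains and how it could be written out; the concrete field values (bridge name, subnet) are assumptions for illustration, not the exact template minikube generates:

package main

import (
	"log"
	"os"
)

// A minimal bridge CNI conflist of the kind referenced above.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// CNI configs are discovered from /etc/cni/net.d.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}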
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
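The retry loop above (repeated "kubectl get sa default" calls until elevateKubeSystemPrivileges finishes) is simply waiting for the default service account to exist before proceeding. A rough client-go equivalent, assuming the kubeconfig path used in the logged commands:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the commands in the log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same check as "kubectl get sa default": does the account exist yet?
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for the default service account")
}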
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
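The pod_ready waits above all reduce to checking each pod's PodReady condition. A short client-go sketch of that check, using a pod name from this run and the same assumed kubeconfig path (illustrative only):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// the "Ready":"True" status the waits above are looking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log; any system-critical pod is checked the same way.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-971041", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}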
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
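The healthz wait above is an HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with the body "ok". A rough sketch using the address from this run; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Apiserver address taken from the log; /healthz is served over TLS.
	url := "https://192.168.50.119:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Simplification for this sketch: skip CA verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}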
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 
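	The failure above is the K8S_KUBELET_NOT_RUNNING pattern. A minimal troubleshooting sketch on the affected node, using only the commands the output itself recommends (the profile name is a placeholder, not taken from this run):
	
		# why did the kubelet never answer on :10248/healthz?
		systemctl status kubelet
		journalctl -xeu kubelet
		# were any control-plane containers created by cri-o at all?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# retry the start with the kubelet cgroup driver pinned, as the suggestion above proposes
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd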
	
	
	==> CRI-O <==
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.725163239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418558725133722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0f402a6-1b84-4932-9db2-b032f4522d4d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.725695617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09cfcd60-0c57-4016-a195-04677e8f3170 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.725773973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09cfcd60-0c57-4016-a195-04677e8f3170 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.726046450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09cfcd60-0c57-4016-a195-04677e8f3170 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.766151089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd1ac6e5-93d5-4347-9c40-ccd0094e1be0 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.766243708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd1ac6e5-93d5-4347-9c40-ccd0094e1be0 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.767295360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36f1e17b-8b62-4ede-984e-25e7a1e2ae29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.767715266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418558767694356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36f1e17b-8b62-4ede-984e-25e7a1e2ae29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.768385968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8584d0e6-bbbf-483c-90b8-97d3cc1acc60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.768463257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8584d0e6-bbbf-483c-90b8-97d3cc1acc60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.768674186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8584d0e6-bbbf-483c-90b8-97d3cc1acc60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.809429365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0dea1a8-2b6c-440c-b2a7-2a93d3cf2526 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.809521960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0dea1a8-2b6c-440c-b2a7-2a93d3cf2526 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.811082502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d340f97-e9ff-4f6a-b6d2-234efc17fb99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.811505191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418558811480568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d340f97-e9ff-4f6a-b6d2-234efc17fb99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.812146473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae4d6108-79c4-4205-8caf-7595169f2f6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.812216206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae4d6108-79c4-4205-8caf-7595169f2f6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.812427379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae4d6108-79c4-4205-8caf-7595169f2f6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.847113746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9226dd0-de1e-4614-b3bd-6b99f82cc5a1 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.847218139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9226dd0-de1e-4614-b3bd-6b99f82cc5a1 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.848368774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4775bbbc-7f15-4d0b-8771-4f3557a78120 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.848873893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418558848846368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4775bbbc-7f15-4d0b-8771-4f3557a78120 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.849639103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03a1f902-a276-4568-8d94-80d5cfabe4e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.849736321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03a1f902-a276-4568-8d94-80d5cfabe4e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:18 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:49:18.850074860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03a1f902-a276-4568-8d94-80d5cfabe4e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d1c7e821430f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   828d1e03a8884       storage-provisioner
	ee742ac25345f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   48436a33fb598       coredns-7db6d8ff4d-6qndw
	2c65de2de8482       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   986c786f16e6c       coredns-7db6d8ff4d-gmljj
	27c5137e81ce4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   48b5b8f586792       kube-proxy-xd6jl
	7ad1f5dc72ccb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   9faabf77253d5       etcd-default-k8s-diff-port-578533
	ec40058eeac29       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   089d5c0dbd275       kube-scheduler-default-k8s-diff-port-578533
	bd8e28deb2f51       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            3                   ebb805e3a4c51       kube-apiserver-default-k8s-diff-port-578533
	b342b0e803235       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   3                   131d57cb0ded2       kube-controller-manager-default-k8s-diff-port-578533
	6dab364e723e9       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   14 minutes ago      Exited              kube-apiserver            2                   92eaaaa01d03c       kube-apiserver-default-k8s-diff-port-578533
	
	
	==> coredns [2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-578533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-578533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=default-k8s-diff-port-578533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:39:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-578533
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:49:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:45:26 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:45:26 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:45:26 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:45:26 +0000   Fri, 19 Jul 2024 19:39:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    default-k8s-diff-port-578533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2511e8bbb67146248781b4996db1e01a
	  System UUID:                2511e8bb-b671-4624-8781-b4996db1e01a
	  Boot ID:                    3e32c8bc-1bd0-40f2-b12c-953fae7d9bc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6qndw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-gmljj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-578533                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-578533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-578533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-xd6jl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-578533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-kz2bz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s   node-controller  Node default-k8s-diff-port-578533 event: Registered Node default-k8s-diff-port-578533 in Controller
	
	
	==> dmesg <==
	[  +0.050896] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul19 19:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.812550] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.517250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.098814] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.063427] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056504] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.172032] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.140914] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.263527] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.166834] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +2.261200] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.058872] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.036461] kauditd_printk_skb: 59 callbacks suppressed
	[Jul19 19:35] kauditd_printk_skb: 79 callbacks suppressed
	[Jul19 19:39] systemd-fstab-generator[3635]: Ignoring "noauto" option for root device
	[  +0.065647] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.991733] systemd-fstab-generator[3961]: Ignoring "noauto" option for root device
	[  +0.088852] kauditd_printk_skb: 55 callbacks suppressed
	[Jul19 19:40] systemd-fstab-generator[4157]: Ignoring "noauto" option for root device
	[  +0.094853] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 19:41] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd] <==
	{"level":"info","ts":"2024-07-19T19:39:55.9895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 switched to configuration voters=(5826257523000412392)"}
	{"level":"info","ts":"2024-07-19T19:39:55.989583Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","added-peer-id":"50db06592c6308e8","added-peer-peer-urls":["https://192.168.72.104:2380"]}
	{"level":"info","ts":"2024-07-19T19:39:56.003449Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:39:56.003655Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"50db06592c6308e8","initial-advertise-peer-urls":["https://192.168.72.104:2380"],"listen-peer-urls":["https://192.168.72.104:2380"],"advertise-client-urls":["https://192.168.72.104:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.104:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:39:56.00369Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:39:56.003812Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.104:2380"}
	{"level":"info","ts":"2024-07-19T19:39:56.003833Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.104:2380"}
	{"level":"info","ts":"2024-07-19T19:39:56.340027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T19:39:56.34008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T19:39:56.340119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 received MsgPreVoteResp from 50db06592c6308e8 at term 1"}
	{"level":"info","ts":"2024-07-19T19:39:56.340133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:39:56.340138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 received MsgVoteResp from 50db06592c6308e8 at term 2"}
	{"level":"info","ts":"2024-07-19T19:39:56.340147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"50db06592c6308e8 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T19:39:56.340154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 50db06592c6308e8 elected leader 50db06592c6308e8 at term 2"}
	{"level":"info","ts":"2024-07-19T19:39:56.343241Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"50db06592c6308e8","local-member-attributes":"{Name:default-k8s-diff-port-578533 ClientURLs:[https://192.168.72.104:2379]}","request-path":"/0/members/50db06592c6308e8/attributes","cluster-id":"31eeb21fffaecb88","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:39:56.343335Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:39:56.343813Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.347132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:39:56.350694Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.104:2379"}
	{"level":"info","ts":"2024-07-19T19:39:56.360105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.360286Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.361999Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.366988Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:39:56.367043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:39:56.367892Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:49:19 up 15 min,  0 users,  load average: 0.33, 0.22, 0.13
	Linux default-k8s-diff-port-578533 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e] <==
	W0719 19:39:50.945200       1 logging.go:59] [core] [Channel #86 SubChannel #87] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.108552       1 logging.go:59] [core] [Channel #83 SubChannel #84] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.242043       1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.254190       1 logging.go:59] [core] [Channel #95 SubChannel #96] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.256676       1 logging.go:59] [core] [Channel #89 SubChannel #90] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.268803       1 logging.go:59] [core] [Channel #59 SubChannel #60] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.274398       1 logging.go:59] [core] [Channel #17 SubChannel #18] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.325629       1 logging.go:59] [core] [Channel #101 SubChannel #102] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.354943       1 logging.go:59] [core] [Channel #56 SubChannel #57] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.379732       1 logging.go:59] [core] [Channel #140 SubChannel #141] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.442333       1 logging.go:59] [core] [Channel #119 SubChannel #120] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.475675       1 logging.go:59] [core] [Channel #137 SubChannel #138] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.536196       1 logging.go:59] [core] [Channel #77 SubChannel #78] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.567131       1 logging.go:59] [core] [Channel #32 SubChannel #33] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.610372       1 logging.go:59] [core] [Channel #44 SubChannel #45] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.664490       1 logging.go:59] [core] [Channel #47 SubChannel #48] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.693419       1 logging.go:59] [core] [Channel #80 SubChannel #81] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.771407       1 logging.go:59] [core] [Channel #161 SubChannel #162] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.785053       1 logging.go:59] [core] [Channel #104 SubChannel #105] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.814033       1 logging.go:59] [core] [Channel #110 SubChannel #111] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.856264       1 logging.go:59] [core] [Channel #155 SubChannel #156] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.921825       1 logging.go:59] [core] [Channel #20 SubChannel #21] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.138771       1 logging.go:59] [core] [Channel #143 SubChannel #144] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.203783       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.366474       1 logging.go:59] [core] [Channel #65 SubChannel #66] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15] <==
	I0719 19:43:16.199520       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:44:57.986129       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:44:57.986490       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 19:44:58.986696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:44:58.986847       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:44:58.986879       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:44:58.987010       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:44:58.987053       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:44:58.988023       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:45:58.987809       1 handler_proxy.go:93] no RequestInfo found in the context
	W0719 19:45:58.988106       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:45:58.988150       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:45:58.988177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0719 19:45:58.988185       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:45:58.989863       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:47:58.989370       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:47:58.989480       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:47:58.989514       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:47:58.990583       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:47:58.990718       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:47:58.990749       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe] <==
	I0719 19:43:43.904898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:44:13.465638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:44:13.912865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:44:43.472320       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:44:43.925925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:45:13.477114       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:45:13.936779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:45:43.482208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:45:43.946837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:46:05.358326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="392.726µs"
	E0719 19:46:13.489070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:46:13.955328       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:46:16.359643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="87.017µs"
	E0719 19:46:43.494639       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:46:43.962990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:13.500525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:47:13.973553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:43.505809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:47:43.984268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:48:13.511676       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:48:13.997255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:48:43.517127       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:48:44.006918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:49:13.522259       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:49:14.013848       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94] <==
	I0719 19:40:14.949084       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:40:14.988650       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.104"]
	I0719 19:40:15.090504       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:40:15.090559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:40:15.090575       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:40:15.094100       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:40:15.094324       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:40:15.094352       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:40:15.098444       1 config.go:192] "Starting service config controller"
	I0719 19:40:15.098478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:40:15.098504       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:40:15.098508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:40:15.098924       1 config.go:319] "Starting node config controller"
	I0719 19:40:15.098996       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:40:15.200073       1 shared_informer.go:320] Caches are synced for node config
	I0719 19:40:15.200124       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:40:15.200164       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d] <==
	W0719 19:39:58.018845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 19:39:58.018880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 19:39:58.846578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 19:39:58.846711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 19:39:58.847366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:39:58.847408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 19:39:58.889235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:58.889280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.007432       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 19:39:59.007482       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:39:59.028369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:59.028410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.036392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 19:39:59.036555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 19:39:59.097637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:39:59.097680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 19:39:59.205194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:59.205238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.288866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 19:39:59.288993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 19:39:59.290013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 19:39:59.290549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 19:39:59.296918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 19:39:59.297048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 19:40:01.010486       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:47:00 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:47:00.356368    3968 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:47:00 default-k8s-diff-port-578533 kubelet[3968]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:47:00 default-k8s-diff-port-578533 kubelet[3968]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:47:00 default-k8s-diff-port-578533 kubelet[3968]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:47:00 default-k8s-diff-port-578533 kubelet[3968]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:47:09 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:47:09.342673    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:47:24 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:47:24.343310    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:47:36 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:47:36.344911    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:47:50 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:47:50.342079    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:48:00 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:00.356893    3968 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:48:00 default-k8s-diff-port-578533 kubelet[3968]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:48:00 default-k8s-diff-port-578533 kubelet[3968]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:48:00 default-k8s-diff-port-578533 kubelet[3968]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:48:00 default-k8s-diff-port-578533 kubelet[3968]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:48:01 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:01.342295    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:48:15 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:15.342414    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:48:27 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:27.341844    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:48:42 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:42.345499    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:48:54 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:48:54.342537    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:49:00 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:49:00.356713    3968 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:49:00 default-k8s-diff-port-578533 kubelet[3968]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:49:00 default-k8s-diff-port-578533 kubelet[3968]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:49:00 default-k8s-diff-port-578533 kubelet[3968]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:49:00 default-k8s-diff-port-578533 kubelet[3968]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:49:08 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:49:08.344067    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	
	
	==> storage-provisioner [d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3] <==
	I0719 19:40:15.945666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:40:15.961819       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:40:15.962208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:40:15.978871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:40:15.979075       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b!
	I0719 19:40:15.979671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03b44dc7-0ac0-41e3-b227-e533b537f886", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b became leader
	I0719 19:40:16.080264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b!
	

                                                
                                                
-- /stdout --
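Note: the repeated metrics-server ImagePullBackOff in the kubelet log above is expected for this test profile. The audit table later in this report shows the addon being enabled with its registry overridden to the unreachable host fake.domain, so the image can never be pulled. A minimal sketch of the recorded enable command plus an illustrative follow-up check (the kubectl invocation and the k8s-app=metrics-server label selector are assumptions for illustration, not taken from this report):

	# addon enable as recorded in the audit table for this profile
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-578533 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

	# illustrative check (assumed label selector); the Events section should show
	# the failed pulls from fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-578533 -n kube-system \
	  describe pod -l k8s-app=metrics-server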
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-kz2bz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz: exit status 1 (64.778596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-kz2bz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0719 19:41:06.570492   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 19:42:32.426118   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-971041 -n no-preload-971041
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:49:42.229443978 +0000 UTC m=+5756.493206959
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
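Note: the failed wait above is the test polling for the dashboard addon's pods after the stop/start cycle. A roughly equivalent manual check, sketched here as an assumption (the namespace and label selector come from the test output; the exact readiness condition the test checks is not shown in this report):

	# hedged equivalent of the 9m wait performed by the test
	kubectl --context no-preload-971041 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m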
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-971041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-971041 logs -n 25: (2.072927339s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
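The openssl/ln pairs above install each CA into the guest's system trust store: the PEM is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked again under its OpenSSL subject hash (<hash>.0) so TLS clients that scan that directory by hash can find it. A per-certificate sketch (file name taken from this run; the hash is whatever openssl prints):

    # Make one CA visible to OpenSSL's hashed-lookup directory.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "${CERT}" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "${CERT}")      # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"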
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
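The `-checkend 86400` probes above are the pre-start certificate health check: each control-plane certificate fails the check if it expires within the next 24 hours (86,400 seconds), which would force regeneration before kubeadm runs. An equivalent standalone loop over the same files:

    # Warn about any control-plane cert that expires within the next 24h.
    for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}" -checkend 86400 \
        || echo "WARN: ${crt} expires within 24h"
    done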
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
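Because kubeadm.go:408 found existing configuration on disk, the control plane is rebuilt with targeted `kubeadm init phase` calls rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane and etcd, in that order, all against the freshly staged /var/tmp/minikube/kubeadm.yaml. Stripped of the ssh_runner wrapping, the restart sequence is:

    # Re-run only the kubeadm init phases needed to bring an existing cluster back up.
    K8S_BIN=/var/lib/minikube/binaries/v1.30.3
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase certs all         --config "${CFG}"
    sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase kubeconfig all    --config "${CFG}"
    sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase kubelet-start     --config "${CFG}"
    sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase control-plane all --config "${CFG}"
    sudo env PATH="${K8S_BIN}:$PATH" kubeadm init phase etcd local        --config "${CFG}"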
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
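After kubelet-start the wait happens in two stages: roughly every 500ms the runner looks for a kube-apiserver process (`pgrep -xnf kube-apiserver.*minikube.*`), and once one exists it polls https://192.168.72.104:8444/healthz until it answers; the first healthz attempt above timed out and is simply retried. A rough standalone equivalent of that wait, assuming the same endpoint and that curl is available on the guest:

    # Stage 1: wait for the apiserver process; stage 2: wait for a healthy /healthz.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    until curl -ksf --max-time 5 https://192.168.72.104:8444/healthz >/dev/null; do sleep 1; done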
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
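The crio.minikube write a few lines above (the `%!s(MISSING)` in the logged command appears to be a log-formatting artifact, not part of the file) leaves a one-line environment file that adds the service CIDR as an insecure registry range, after which CRI-O is restarted to pick it up. Reconstructed by hand:

    # Drop the extra CRI-O flags where this image expects them, then restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | \
      sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio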
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
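fix.go then samples the guest clock over SSH with `date +%s.%N` and compares it to the host's wall clock; the ~89.5ms delta seen here is inside tolerance, so no clock adjustment is made. Schematically (using this machine's existing SSH key; minikube performs the comparison internally rather than shelling out like this):

    # Compare guest and host clocks; a small delta means no resync is needed.
    KEY=/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa
    guest=$(ssh -i "${KEY}" docker@192.168.61.107 date +%s.%N)
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.3fs\n", h - g }'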
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
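The sed/grep runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is forced to cgroupfs with conmon placed in the "pod" cgroup, the stale /etc/cni/net.mk directory is removed, and net.ipv4.ip_unprivileged_port_start=0 is injected into a default_sysctls list (created first if absent). A quick way to confirm the result on the guest:

    # Show the keys the provisioner just rewrote in the CRI-O drop-in.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",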
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
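The sysctl probe above fails with status 255 only because br_netfilter is not loaded yet (the /proc/sys/net/bridge keys exist only while the module is in), so the module is loaded and IPv4 forwarding is enabled; both are prerequisites for bridged pod traffic to traverse iptables. The same preparation by hand:

    # Make bridged traffic visible to iptables and allow the node to forward IPv4.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables      # resolvable once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward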
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
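The earlier "couldn't find preloaded image" decision comes from listing images with sudo crictl images --output json and looking for the expected kube-apiserver tag; only when the tag is missing does minikube fall back to copying the preload tarball, as above. A hedged sketch of such a check, where the JSON field names (images, repoTags) follow typical crictl output and are assumptions rather than something copied from minikube:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of `crictl images --output json` closely enough
// for a tag lookup; only the fields used below are declared.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("preloaded image present:", ok)
}
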
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
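Each openssl x509 -noout -checkend 86400 run above asks one question per certificate: will it still be valid 24 hours from now (exit status 0 if yes)? The same check can be done in-process; a small sketch, using one of the paths listed above purely as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the in-process equivalent of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
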
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
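The retry.go:31 lines interleaved here are the wait loop used while the old-k8s-version VM acquires a DHCP lease: probe for the domain's IP, log the next delay, sleep, and try again. A reduced sketch of a retry helper in that spirit; the plain doubling backoff is illustrative, since minikube's own delays are jittered:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, doubling the delay after each failure.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d: will retry after %s: %v\n", i+1, delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 250*time.Millisecond, func() error {
		// Placeholder for "look up the domain's current IP address".
		return errors.New("waiting for machine to come up")
	})
	fmt.Println("final:", err)
}
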
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
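The healthz sequence above is the usual restart progression: 403 while only the anonymous user exists and RBAC roles are still being bootstrapped, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A standalone sketch of such a poller; the endpoint and the insecure TLS setting are illustrative, as minikube itself authenticates with client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout elapses,
// treating 403/500 (and transient dial errors) as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
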
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
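The pod_ready.go lines above skip each pod because the node is still NotReady; the wait keys off the pod's Ready condition (and the hosting node's status) rather than the phase alone. A client-go sketch of the per-pod part of that check; the kubeconfig path and pod name are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-7h48w", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
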
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
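
These "waiting for machine to come up" entries come from minikube's retry helper: it keeps asking libvirt for the domain's DHCP lease, sleeping a little longer (with jitter) between attempts. A minimal stdlib-only sketch of that pattern; lookupIP and the timings are illustrative stand-ins, not minikube's actual API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a growing, jittered delay, mirroring the
    // "will retry after ..." lines in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2 // back off between attempts
    	}
    	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
    	if ip, err := waitForIP(3 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("got IP:", ip)
    	}
    }
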
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
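
pod_ready.go polls each system-critical pod until its Ready condition reports True, within the 6m0s budget noted above. A rough equivalent that shells out to kubectl with a JSONPath query instead of minikube's internal client; the context name comes from the log, and the 2s polling interval is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady asks kubectl for the pod's Ready condition status ("True"/"False").
    func podReady(context, namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", context,
    		"-n", namespace, "get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
    	for time.Now().Before(deadline) {
    		ok, err := podReady("embed-certs-889918", "kube-system", "etcd-embed-certs-889918")
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // assumed polling interval
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
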
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
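
With no preloaded images on the node, minikube copies the ~473 MB preload tarball over SSH and unpacks it into /var with lz4, preserving xattrs so file capabilities survive. A simplified local sketch of the extraction step (the real flow runs this through ssh_runner on the guest; the command and paths are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the command in the log: extract the preloaded CRI-O image
    	// tarball into /var, preserving extended attributes and capabilities.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("preloaded images extracted into /var")
    }
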
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
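
The 403 and 500 responses above are normal while the restarted apiserver finishes its post-start hooks (RBAC bootstrap roles, priority classes); the health check simply keeps polling /healthz until it gets a 200. A stdlib-only sketch of that loop using the endpoint from the log; the timeout budget and the skipped TLS verification are simplifications here, the real code trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Verification is skipped only to keep the sketch self-contained.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.72.104:8444/healthz" // endpoint from the log
    	deadline := time.Now().Add(4 * time.Minute)  // assumed budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
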
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
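
Each "needs transfer" line is the result of comparing the image ID reported by podman image inspect --format {{.Id}} with the digest minikube expects; on a mismatch the stale image is removed with crictl rmi and a cached tarball would be loaded instead, which fails here because the cache files are absent on this host. A rough sketch of that decision; the helper names are hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // runtimeImageID asks the runtime for the image's ID; empty if absent.
    func runtimeImageID(image string) string {
    	out, _ := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	return strings.TrimSpace(string(out))
    }

    // ensureImage removes a mismatched image and checks whether a cached
    // tarball exists to load in its place.
    func ensureImage(image, wantID, cachePath string) error {
    	if runtimeImageID(image) == wantID {
    		return nil // already present with the expected ID
    	}
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // best effort
    	if _, err := os.Stat(cachePath); err != nil {
    		return fmt.Errorf("unable to load cached image %s: %w", image, err)
    	}
    	// The real code would now copy cachePath to the node and load it.
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.2",
    		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
    		"/home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2")
    	fmt.Println(err)
    }
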
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
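
The bash one-liner above pins control-plane.minikube.internal (and, earlier, host.minikube.internal) in /etc/hosts: it filters out any existing entry for the name, appends the new mapping, and copies the result back via a temp file. The same idea in Go, operating on a local file; the IP and hostname are taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so that host resolves to ip, dropping any
    // previous line for that host first (same effect as the grep/echo/cp one-liner).
    func pinHost(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
    			continue // drop the stale entry
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	tmp := hostsPath + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath) // the real flow uses sudo cp instead
    }

    func main() {
    	fmt.Println(pinHost("/etc/hosts", "192.168.39.220", "control-plane.minikube.internal"))
    }
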
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
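
Each openssl x509 -noout -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would make minikube regenerate it. An equivalent check with Go's crypto/x509 (the file path is one of those probed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d (what `openssl x509 -checkend` tests, with d = 86400s).
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
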
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
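The lines above show the stale-kubeconfig cleanup pass: for each expected file under /etc/kubernetes, minikube greps for the control-plane endpoint and, when the grep exits non-zero (file missing or endpoint absent), removes the file so kubeadm can regenerate it. A minimal Go sketch of that pattern, using the endpoint and file list from the log (illustrative only, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443" // from the log above
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the file itself) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }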
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
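With the old configs gone, minikube re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied /var/tmp/minikube/kubeadm.yaml, each with PATH pointed at the versioned binaries directory. A rough sketch of how those invocations could be assembled; the paths and the v1.20.0 version come from the log, the loop itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	binDir := "/var/lib/minikube/binaries/v1.20.0" // version used by this profile in the log
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		// Each phase runs through bash so the PATH override applies to the kubeadm binary.
    		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    }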
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
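The repeated pgrep lines are the api_server wait loop: roughly every 500ms, minikube checks whether a kube-apiserver process matching the minikube config has appeared. A minimal sketch of such a poll loop (the pgrep pattern is taken from the log; the timeout value is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process shows up or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Same pattern as in the log; pgrep exits 0 once a match exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }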
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
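The libmachine lines above show the KVM driver polling for the domain's DHCP lease, backing off with growing, jittered delays ("will retry after 1.55s / 2.21s / 3.26s"). A small sketch of that retry-with-backoff shape; lookupIP is a stand-in for "ask libvirt for the domain's current IP" and the backoff constants are assumptions, not minikube's retry.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder for querying the domain's current IP from the host network.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func main() {
    	delay := 1500 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Jitter the wait, as the varying delays in the log suggest, then grow it.
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    }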
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
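Once the restarted kubelet initialises, minikube waits up to 4m for each system-critical pod to report the Ready condition, starting with CoreDNS. A hedged client-go sketch of that readiness check; the kubeconfig path is a placeholder and the polling interval is an assumption, while the namespace, pod name and 4m budget come from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	deadline := time.Now().Add(4 * time.Minute) // the "extra waiting up to 4m0s" from the log
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-tngnj", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }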
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
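provision.go then generates a machine server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the hostname. A compressed crypto/x509 sketch of building that SAN list; it self-signs for brevity, whereas minikube signs with the CA key under .minikube/certs, and only the SANs and org string are taken from the log:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-971041"}}, // org from the log
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as reported in the log, split into IP and DNS entries.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.119")},
    		DNSNames:    []string{"localhost", "minikube", "no-preload-971041"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key) // self-signed for the sketch
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("generated server cert, %d bytes of DER\n", len(der))
    }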
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
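The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports. Reconstructed from those sed expressions (not copied from the VM), the relevant part of the drop-in would end up roughly like this:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]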
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
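The repeating "sudo pgrep -xnf kube-apiserver.*minikube.*" lines from pid 68995 are one of the interleaved minikube start processes polling, roughly every 500 ms, for a kube-apiserver process to appear after kubelet is restarted. A minimal local sketch of the same polling idea, assuming pgrep is on PATH (minikube runs the command over SSH inside the VM, so the file name and main function here are purely illustrative):

// pollapiserver.go: poll with pgrep until a kube-apiserver process shows up
// or a deadline passes. Sketch only; minikube runs the equivalent over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// -x: match the whole command line, -n: newest match, -f: match the full command line.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process found: pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // same ~500 ms cadence seen in the log
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}

pgrep exits non-zero when nothing matches, which is why the loop treats a nil error as success.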
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
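This is the no-preload profile, so every control-plane image is shipped from the host cache: tags whose expected digest is missing are removed with crictl rmi, the tarballs under /var/lib/minikube/images are stat-checked and copied only if absent (the "copy: skipping ... (exists)" lines), and each one is loaded into the runtime's storage with "sudo podman load -i <tarball>". A rough local sketch of that load step, assuming podman is installed and using tarball paths named in the log (the program itself is illustrative):

// loadimages.go: load cached image tarballs into the container runtime with
// "podman load -i", mirroring the crio.go "Loading image:" steps in the log.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
		"/var/lib/minikube/images/etcd_3.5.14-0",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	for _, t := range tarballs {
		if _, err := os.Stat(t); err != nil {
			log.Printf("skipping %s: %v", t, err) // minikube would copy it from the host cache first
			continue
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s failed: %v\n%s", t, err, out)
		}
		fmt.Printf("loaded %s\n", t)
	}
}

Loading through podman works for the crio runtime because, on the minikube guest image, podman and CRI-O are configured to share the same containers/storage backend.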
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
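The kubeadm config dumped above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against, and copied over, /var/tmp/minikube/kubeadm.yaml. A small stdlib-only sketch that lists the kind of each document in such a file, useful when checking what was rendered (the path is the one from the log; the program itself is illustrative and splits on document separators rather than parsing YAML):

// kubeadmkinds.go: list the "kind:" of each YAML document in a kubeadm config.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	doc := 0
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "---":
			doc++
		case strings.HasPrefix(line, "kind:"):
			fmt.Printf("document %d: %s\n", doc, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}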
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
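The bash one-liner above pins control-plane.minikube.internal to the node IP: it filters any existing entry out of /etc/hosts and appends the new mapping through a temp file. A hedged Go equivalent of that pipeline, assuming it runs as root inside the guest (the host name and IP are the ones from the log; the file name is illustrative):

// pinhosts.go: replace any control-plane.minikube.internal entry in /etc/hosts
// with the given IP, the same effect as the bash one-liner in the log.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.50.119" // node IP taken from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping, like the grep -v in the original
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}

The original writes through a temporary file before copying it into place, which avoids truncating /etc/hosts if the write is interrupted; the sketch keeps it simple and writes in place.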
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
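The series of "openssl x509 ... -checkend 86400" runs asks whether each control-plane certificate expires within the next 86400 seconds (24 hours); a failing check would make minikube regenerate the certificate rather than reuse it. The same question in a short stdlib sketch (the certificate path is one of the files named in the log; the program itself is illustrative):

// certcheck.go: report whether a PEM certificate expires within 24 hours,
// the question "openssl x509 -checkend 86400" answers in the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}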
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
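Because existing configuration files and still-valid certificates were found, the restart path does not run a full "kubeadm init"; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against the rendered config. A hedged sketch of that sequencing, assuming root, the kubeadm binary location, and the config path shown in the log (the program itself is illustrative):

// restartphases.go: replay the kubeadm init phases the log shows for a
// control-plane restart, stopping at the first failure. Sketch only.
package main

import (
	"log"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
	log.Println("control-plane phases completed; waiting for the apiserver is the next step")
}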
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
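The healthz exchange above is the usual progression on a restart: connection refused while the apiserver static pod starts, 403 for the anonymous probe until the RBAC bootstrap roles that allow unauthenticated access to /healthz are reconciled, 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, then 200. A minimal polling sketch of that probe, using the apiserver address from the log and skipping certificate verification because only readiness is being tested (the program itself is illustrative):

// healthz.go: poll the apiserver /healthz endpoint the way the log does,
// treating anything but 200 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.119:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}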
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
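Every "describe nodes" attempt in this stretch fails the same way: the kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container present (every crictl probe above returns an empty id list) the connection is refused. A hedged manual cross-check, run on the node itself and using only standard curl and crictl flags, might look like:

	# Nothing answers on the apiserver port; curl exits non-zero on "connection refused".
	curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on :8443"
	# Mirror the probe the log already performs for the apiserver container.
	sudo crictl ps -a --name=kube-apiserver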
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
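Each diagnostic cycle above issues one crictl query per expected control-plane component and records an empty "found id" when nothing matches. The same sweep, written as a short loop over exactly the names probed in the log (a sketch assuming crictl is installed on the node):

	#!/usr/bin/env bash
	# Probe each expected control-plane container; an empty id column reproduces
	# the "No container was found matching ..." result seen throughout this log.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids="$(sudo crictl ps -a --quiet --name="$name")"
	  printf '%-24s %s\n' "$name" "${ids:-<none>}"
	done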
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
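The kubeadm.go:155-163 lines above are minikube's stale-config check: each /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint and removed when the check cannot confirm it, so kubeadm init starts from clean kubeconfigs. A rough shell equivalent of that loop (illustrative only, not minikube's actual code; endpoint taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done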
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
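The 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration for the crio runtime. Its exact contents are not shown in the log; the sketch below writes a representative bridge + portmap conflist of the kind CRI-O picks up from /etc/cni/net.d (all field values here are illustrative assumptions, not the file minikube generated):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF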
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
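The interleaved 68995 lines belong to a different profile whose kubelet keeps failing its local health check. kubeadm's probe is just an HTTP GET on the kubelet's healthz port; the usual first checks on that node would be (standard troubleshooting commands, not taken from this log):

    curl -sSL http://localhost:10248/healthz              # the same probe kubeadm runs
    sudo systemctl status kubelet --no-pager              # is the kubelet unit running?
    sudo journalctl -u kubelet --no-pager | tail -n 50    # recent kubelet errors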
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
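The repeated "kubectl get sa default" runs above are minikube polling for the default service account before it grants kube-system elevated RBAC (elevateKubeSystemPrivileges); the loop took about 12.8s here. A hand-rolled equivalent of that poll, using the same binary path and kubeconfig as the log:

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default service account exists
    done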
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
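Here the addons were enabled as part of the start flow; the same set can also be toggled or inspected after the fact with the minikube CLI (hypothetical invocation for this profile):

    minikube -p default-k8s-diff-port-578533 addons enable metrics-server
    minikube -p default-k8s-diff-port-578533 addons list   # verify what is enabled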
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
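The healthz probe above is a plain HTTPS GET against the apiserver; it can be reproduced from the host with curl (endpoint taken from the log; -k skips verification of the minikube-generated certificate):

    curl -k https://192.168.72.104:8444/healthz
    # a healthy apiserver answers: ok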
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
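The two W0719 lines above are kubeadm flagging the v1beta3 spec in /var/tmp/minikube/kubeadm.yaml as deprecated for v1.31.0-beta.0. kubeadm's own suggestion can be followed on the node to see the updated spec (binary path as used in the log; the output path is illustrative):

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /tmp/kubeadm-migrated.yaml   # writes the equivalent newer-API config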
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.686335108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418583686314276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2db490bf-89fb-457c-8107-a576d42a4bbd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.687081175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a61e993-bde4-410f-8be0-600361e5dcfe name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.687137385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a61e993-bde4-410f-8be0-600361e5dcfe name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.687328653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a61e993-bde4-410f-8be0-600361e5dcfe name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.727664477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23f976f4-1e31-4f09-9e06-c25f7ae4a9db name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.727783624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23f976f4-1e31-4f09-9e06-c25f7ae4a9db name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.728833374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37bfc7c9-80de-4b82-a18b-24a6bc2ae17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.729314771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418583729292840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37bfc7c9-80de-4b82-a18b-24a6bc2ae17e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.729819915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ab79b97-19ed-4169-8ee2-ff0d7bfd102e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.729872449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ab79b97-19ed-4169-8ee2-ff0d7bfd102e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.730068356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ab79b97-19ed-4169-8ee2-ff0d7bfd102e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.770957541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ed5f532-b4f5-4caf-95af-1285f3563e46 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.771039054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ed5f532-b4f5-4caf-95af-1285f3563e46 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.772796151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42176748-ebb2-4543-8b62-6026d1ac1980 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.773132938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418583773112271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42176748-ebb2-4543-8b62-6026d1ac1980 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.774019917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9dc395c-b24c-4756-8b59-a98a51e80128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.774094451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9dc395c-b24c-4756-8b59-a98a51e80128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.774308689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9dc395c-b24c-4756-8b59-a98a51e80128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.810636888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fdf61d54-9ede-412a-b8d3-f6ab09453743 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.810743910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fdf61d54-9ede-412a-b8d3-f6ab09453743 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.811615148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b362e820-0a24-495a-a62a-998c15b795c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.812151319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418583812125336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b362e820-0a24-495a-a62a-998c15b795c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.812808877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eba726b4-da4a-4a7a-be2e-cc813d72cce3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.812871745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eba726b4-da4a-4a7a-be2e-cc813d72cce3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:49:43 no-preload-971041 crio[727]: time="2024-07-19 19:49:43.813082086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eba726b4-da4a-4a7a-be2e-cc813d72cce3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4026f272ae2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   09dac1725945b       storage-provisioner
	9bd028fbdc83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b543f5028c7b4       coredns-5cfdc65f69-7kdqd
	320e4f1c64889       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   112d13271d54b       coredns-5cfdc65f69-d5xr6
	a143b1d70f220       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   ec119dcaf5f52       kube-proxy-445wh
	6d1093d162109       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   27241a85a80e0       etcd-no-preload-971041
	007fed8712ef2       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   33f1d8ea18a50       kube-scheduler-no-preload-971041
	7b4bc3716261b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   010a590647984       kube-apiserver-no-preload-971041
	f2c630159ccd4       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   38c2e5a5bf350       kube-controller-manager-no-preload-971041
	49caf99a342cf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   f0e74b3e6aae6       kube-apiserver-no-preload-971041
	
	
	==> coredns [320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-971041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-971041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=no-preload-971041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-971041
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:49:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:45:43 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:45:43 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:45:43 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:45:43 +0000   Fri, 19 Jul 2024 19:40:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.119
	  Hostname:    no-preload-971041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b4cbd261858437eabb71e1d91bdb15f
	  System UUID:                6b4cbd26-1858-437e-abb7-1e1d91bdb15f
	  Boot ID:                    13dfdfb1-c68e-4be1-bf8e-2dde7420e709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-7kdqd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-5cfdc65f69-d5xr6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-971041                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-971041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-971041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-445wh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-no-preload-971041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-78fcd8795b-r6hdb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node no-preload-971041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node no-preload-971041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node no-preload-971041 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node no-preload-971041 event: Registered Node no-preload-971041 in Controller
	
	
	==> dmesg <==
	[  +0.043904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul19 19:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.838577] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531345] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.474200] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.063994] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060522] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.185227] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.145427] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.289259] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +14.713791] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.056477] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.943248] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +3.399678] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.013528] kauditd_printk_skb: 79 callbacks suppressed
	[Jul19 19:36] kauditd_printk_skb: 8 callbacks suppressed
	[Jul19 19:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.335736] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +4.842933] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.712419] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[  +5.688611] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.205528] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +7.324737] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0] <==
	{"level":"info","ts":"2024-07-19T19:40:20.909179Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:40:20.909433Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dbf7131a3d9cd4a2","initial-advertise-peer-urls":["https://192.168.50.119:2380"],"listen-peer-urls":["https://192.168.50.119:2380"],"advertise-client-urls":["https://192.168.50.119:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.119:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:40:20.909474Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:40:20.909539Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.119:2380"}
	{"level":"info","ts":"2024-07-19T19:40:20.909572Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.119:2380"}
	{"level":"info","ts":"2024-07-19T19:40:21.547832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.547939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.548016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 received MsgPreVoteResp from dbf7131a3d9cd4a2 at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.548073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 received MsgVoteResp from dbf7131a3d9cd4a2 at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dbf7131a3d9cd4a2 elected leader dbf7131a3d9cd4a2 at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.553073Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.555197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:40:21.556192Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:40:21.559807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:40:21.559847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:40:21.559019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"753d5ab11f342e1f","local-member-id":"dbf7131a3d9cd4a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.566819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.566892Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.559048Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:40:21.559089Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dbf7131a3d9cd4a2","local-member-attributes":"{Name:no-preload-971041 ClientURLs:[https://192.168.50.119:2379]}","request-path":"/0/members/dbf7131a3d9cd4a2/attributes","cluster-id":"753d5ab11f342e1f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:40:21.567788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:40:21.568583Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.119:2379"}
	{"level":"info","ts":"2024-07-19T19:40:21.570833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:49:44 up 14 min,  0 users,  load average: 0.53, 0.38, 0.27
	Linux no-preload-971041 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95] <==
	W0719 19:40:14.471566       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.471967       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.473458       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.485333       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.517829       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.538567       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.563321       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.604320       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.633280       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.707558       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.716823       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.809960       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.838244       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.867439       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.871997       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.880450       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.939039       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.995399       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.057127       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.127202       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.145617       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.149383       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.156971       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.259503       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.321313       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8] <==
	W0719 19:45:24.277351       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:45:24.277419       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0719 19:45:24.278438       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:45:24.278477       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:46:24.279625       1 handler_proxy.go:99] no RequestInfo found in the context
	W0719 19:46:24.279658       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:46:24.279764       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0719 19:46:24.279768       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 19:46:24.280913       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:46:24.280942       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:48:24.281505       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:48:24.281923       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 19:48:24.281505       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:48:24.282025       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 19:48:24.283378       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:48:24.283422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2] <==
	I0719 19:44:31.204489       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:44:31.240388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:45:01.212363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:45:01.246764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:45:31.221667       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:45:31.252421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:45:43.093534       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-971041"
	I0719 19:46:01.230547       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:46:01.259112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:46:28.061995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="395.734µs"
	I0719 19:46:31.238464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:46:31.264626       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:46:43.049111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="151.432µs"
	I0719 19:47:01.247640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:01.269412       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:47:31.257273       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:47:31.275532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:48:01.265844       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:48:01.282248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:48:31.276018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:48:31.289484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:49:01.285413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:49:01.295027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:49:31.296670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:49:31.300419       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 19:40:31.687563       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 19:40:31.705669       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.119"]
	E0719 19:40:31.705986       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 19:40:31.746525       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 19:40:31.746607       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:40:31.746650       1 server_linux.go:170] "Using iptables Proxier"
	I0719 19:40:31.749078       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 19:40:31.749330       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 19:40:31.749526       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:40:31.752264       1 config.go:197] "Starting service config controller"
	I0719 19:40:31.752296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:40:31.752321       1 config.go:104] "Starting endpoint slice config controller"
	I0719 19:40:31.752331       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:40:31.753631       1 config.go:326] "Starting node config controller"
	I0719 19:40:31.753657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:40:31.853096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:40:31.853259       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:40:31.853783       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89] <==
	W0719 19:40:23.352447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 19:40:23.352514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:23.352554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.352657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:40:23.355037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.354866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:23.355077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.354944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:40:23.355149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0719 19:40:23.352522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.158752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:24.158867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.234501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:24.234550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.355940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:40:24.355991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.518926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:40:24.519045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.533178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 19:40:24.533239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.541573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 19:40:24.541677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.714083       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 19:40:24.714349       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0719 19:40:27.617897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:47:26 no-preload-971041 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:47:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:47:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:47:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:47:34 no-preload-971041 kubelet[3272]: E0719 19:47:34.033102    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:47:47 no-preload-971041 kubelet[3272]: E0719 19:47:47.033308    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:48:01 no-preload-971041 kubelet[3272]: E0719 19:48:01.032940    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:48:12 no-preload-971041 kubelet[3272]: E0719 19:48:12.033975    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:48:23 no-preload-971041 kubelet[3272]: E0719 19:48:23.034119    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:48:26 no-preload-971041 kubelet[3272]: E0719 19:48:26.067914    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:48:26 no-preload-971041 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:48:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:48:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:48:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:48:36 no-preload-971041 kubelet[3272]: E0719 19:48:36.033973    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:48:49 no-preload-971041 kubelet[3272]: E0719 19:48:49.032868    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:49:00 no-preload-971041 kubelet[3272]: E0719 19:49:00.035300    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:49:15 no-preload-971041 kubelet[3272]: E0719 19:49:15.033408    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]: E0719 19:49:26.035044    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]: E0719 19:49:26.067579    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:49:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:49:41 no-preload-971041 kubelet[3272]: E0719 19:49:41.033431    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	
	
	==> storage-provisioner [d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf] <==
	I0719 19:40:33.345865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:40:33.371192       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:40:33.371330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:40:33.387320       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:40:33.387921       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f!
	I0719 19:40:33.387959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dd3d92e-4654-477a-8925-bc8a8939fedb", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f became leader
	I0719 19:40:33.488778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f!
	

                                                
                                                
-- /stdout --
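The "forbidden" messages in the kube-scheduler log above all target resources the scheduler's informers list at startup (statefulsets, storageclasses, csidrivers, poddisruptionbudgets, replicasets, persistentvolumeclaims, nodes, the extension-apiserver-authentication configmap); they are normally transient, appearing only until the bootstrap RBAC objects are in place. A minimal after-the-fact check, assuming admin credentials and the no-preload-971041 context used by the post-mortem commands below (plain kubectl, not part of the harness):

    # does the scheduler identity now have the list permission it was denied at 19:40:24?
    kubectl --context no-preload-971041 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
    # inspect the bootstrap binding that grants the scheduler its role, if present
    kubectl --context no-preload-971041 get clusterrolebinding system:kube-scheduler -o yaml

If the first reports "yes" and the binding exists, the errors above were a startup race rather than a lasting RBAC problem.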
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-971041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-r6hdb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb: exit status 1 (62.257706ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-r6hdb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.11s)
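The only non-running pod flagged by the post-mortem above is metrics-server-78fcd8795b-r6hdb, and the kubelet log shows why: the container is in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, a registry host that is not expected to resolve. A quick way to see the configured image, assuming the pod's owner is a Deployment named metrics-server (inferred from the pod name, not shown in the output above) and that the addon is still deployed:

    # print the container image configured on the metrics-server Deployment
    kubectl --context no-preload-971041 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # recent image-pull failures also surface as Failed events in the namespace
    kubectl --context no-preload-971041 -n kube-system get events --field-selector reason=Failed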

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
[... identical helpers_test.go:329 warning repeated 93 more times; every pod list against https://192.168.39.220:8443 was refused ...]
E0719 19:44:43.525727   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
[the preceding warning repeated 142 more times: dial tcp 192.168.39.220:8443: connect: connection refused]
E0719 19:47:32.426287   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
[the preceding warning repeated 36 more times: dial tcp 192.168.39.220:8443: connect: connection refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
E0719 19:49:43.525140   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
[the preceding warning line repeats 51 more times while the API server connection is still refused]
E0719 19:50:35.474596   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (222.195033ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-570369" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
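For context, the wait that timed out above is a poll for pods matching the label selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace; each WARNING line earlier corresponds to one failed list attempt while the API server refused connections. The following is a minimal client-go sketch of an equivalent manual poll, not the actual helpers_test.go implementation; the kubeconfig path and the 10s poll interval are assumptions.

	// poll_dashboard.go - hypothetical sketch of the kind of poll the test performs.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Hypothetical kubeconfig path for the old-k8s-version-570369 profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// The test report waited 9m0s before giving up with "context deadline exceeded".
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
	
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Corresponds to the WARNING lines above: the list call fails while
				// the API server is unreachable (connection refused).
				fmt.Println("pod list warning:", err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("dashboard pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				// Mirrors the "context deadline exceeded" failure in the report.
				fmt.Println("timed out:", ctx.Err())
				return
			case <-time.After(10 * time.Second):
			}
		}
	}

Since the apiserver never came back after the stop/start cycle, every list attempt in such a loop would fail until the overall deadline expired, which is the failure mode recorded here.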
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (228.745503ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25: (1.547852356s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
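The warning above is the expected fallback path: before loading each cached image tarball under .minikube/cache/images, minikube stats the file and simply logs this warning and continues without the cached copy when it is missing. A minimal sketch of that pre-check in Go, with a made-up helper name (not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// hasCachedImage reports whether an image tarball exists in the local cache.
// A cache miss is not a hard error: the caller just proceeds without it,
// which is what the "Unable to load cached images" warning in the log shows.
func hasCachedImage(cacheDir, image string) (bool, error) {
	_, err := os.Stat(filepath.Join(cacheDir, image))
	if errors.Is(err, fs.ErrNotExist) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	ok, err := hasCachedImage(os.TempDir(), "kube-apiserver_v1.20.0")
	fmt.Println("cached:", ok, "err:", err)
}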
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
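The block above is the kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A quick way to spot-check fields such as kubernetesVersion and clusterName is to unmarshal the ClusterConfiguration document; the sketch below uses gopkg.in/yaml.v3 and a trimmed-down struct of its own, not kubeadm's real types:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// clusterCfg holds only the fields being spot-checked from the
// ClusterConfiguration shown in the log; everything else is ignored.
type clusterCfg struct {
	Kind              string `yaml:"kind"`
	ClusterName       string `yaml:"clusterName"`
	KubernetesVersion string `yaml:"kubernetesVersion"`
}

func main() {
	doc := []byte(`apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: mk
kubernetesVersion: v1.20.0
`)
	var c clusterCfg
	if err := yaml.Unmarshal(doc, &c); err != nil {
		panic(err)
	}
	// Expect the same values that appear in the generated config above.
	fmt.Printf("%s: cluster=%s version=%s\n", c.Kind, c.ClusterName, c.KubernetesVersion)
}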
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
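The bash one-liner above keeps the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale entry is filtered out before the current IP is appended. A rough local equivalent in Go (paths and the write step are illustrative; the real command runs over SSH with sudo, and this version also drops blank lines):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-style file so exactly one line maps name to ip,
// mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping (and empty leftovers)
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	tmp := os.TempDir() + "/hosts.example"
	if err := ensureHostsEntry(tmp, "192.168.39.220", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("updated", tmp)
}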
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
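The six `openssl x509 -checkend 86400` invocations above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is restarted. The same check can be expressed with Go's standard library; the file path below is just one of the certs named in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now,
// which is what `openssl x509 -checkend 86400` tests with d = 24h.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, "err:", err)
}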
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
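The repeated pgrep runs above are the apiserver wait loop: after the kubeadm init phases, minikube polls roughly every half second for a kube-apiserver process before moving on. A stripped-down local version of such a loop (the interval and timeout values are made up here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until a matching process exists
// (pgrep exits 0) or the deadline passes.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // found a matching process
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for process %q", pattern)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 10*time.Second)
	fmt.Println(err)
}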
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
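The pod_ready waits above amount to fetching each system-critical pod and checking its Ready condition until it turns True or the 4m0s budget runs out. A hedged client-go sketch of a single such check; the namespace and pod name come from the log, while the kubeconfig location, timeout, and overall structure are assumptions rather than minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady checks the PodReady condition, which is what "Ready":"True" in the log refers to.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-tngnj", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}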
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
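The line above shows provisioning generating a host-specific server certificate whose SANs cover 127.0.0.1, 192.168.50.119, localhost, minikube, and no-preload-971041. For illustration only, here is a self-signed certificate with the same SAN list built from Go's crypto/x509; minikube actually signs against the ca.pem/ca-key.pem pair copied above, and the key size, lifetime, and usages here are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key and template for a throwaway server cert; the SAN list and org mirror the log line.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-971041"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-971041"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.119")},
	}
	// Self-signed for the sketch: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}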
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
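	The fix.go lines above compare the guest VM's clock against the host before deciding whether a resync is needed; here the ~61ms skew is accepted as within tolerance. A minimal Go sketch of that comparison follows; the 1-second tolerance and the helper name are illustrative assumptions, not values taken from minikube's source.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaWithinTolerance reports whether the skew between the guest's
// clock and the host's clock is small enough to skip a clock adjustment.
// The 1s tolerance passed below is an illustrative assumption.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log above: guest 19:35:15.673203772, host 19:35:15.612120403.
	host := time.Date(2024, 7, 19, 19, 35, 15, 612120403, time.UTC)
	guest := time.Date(2024, 7, 19, 19, 35, 15, 673203772, time.UTC)

	delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("guest clock delta is %v, within tolerance: %v\n", delta, ok)
}
```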
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
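	The preceding sed invocations rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, replace conmon_cgroup with "pod", and ensure a default_sysctls block opens unprivileged ports from 0. A rough offline sketch of the same substitutions applied to an in-memory config string is below; the regular expressions are approximations for illustration, not minikube's code.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A made-up starting config, standing in for 02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as the first sed command does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Force the cgroupfs cgroup driver.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// Make sure a default_sysctls block exists and opens unprivileged ports.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	fmt.Print(conf)
}
```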
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
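	start.go above waits up to 60s for the CRI socket at /var/run/crio/crio.sock to appear after restarting the runtime, then queries crictl for the version shown. A small sketch of such a wait loop is below; the 500ms poll interval is an assumption, not taken from the source.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
```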
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
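	The cache_images.go flow above probes the runtime with `sudo podman image inspect` for each required image, removes mismatched copies with crictl rmi, and then loads the cached tarballs from /var/lib/minikube/images with `sudo podman load -i`, skipping tarballs that are already present on the node. A condensed sketch of that check-then-load loop using os/exec is below; the loadIfMissing helper is hypothetical, though the podman commands mirror the log.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadIfMissing loads a cached image tarball into the container runtime
// unless `podman image inspect` says the image already exists.
func loadIfMissing(image, tarball string) error {
	// Same probe the log shows: inspect succeeds only if the image is present.
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		fmt.Printf("skipping %s: already present\n", image)
		return nil
	}
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached tarball missing: %w", err)
	}
	fmt.Printf("loading %s from %s\n", image, tarball)
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	dir := "/var/lib/minikube/images"
	images := map[string]string{
		"registry.k8s.io/kube-proxy:v1.31.0-beta.0": "kube-proxy_v1.31.0-beta.0",
		"registry.k8s.io/etcd:3.5.14-0":             "etcd_3.5.14-0",
	}
	for image, file := range images {
		if err := loadIfMissing(image, filepath.Join(dir, file)); err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", image, err)
		}
	}
}
```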
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
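	The pod_ready.go lines above poll pods in the kube-system namespace until their Ready condition reports True, giving up after 4m0s. A bare client-go sketch of that readiness predicate is below; the kubeconfig source, poll interval, and the pod name reused from the log are placeholders for illustration.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// predicate the `has status "Ready":"True"` lines above are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is illustrative; the tests use the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-cdb66", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
```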
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
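	The `openssl x509 ... -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another day before the existing configuration is reused. An equivalent check written against Go's crypto/x509 is sketched below; it is offered as a stand-in for the openssl call, not as minikube's implementation.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	paths := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, p := range paths {
		soon, err := expiresWithin(p, 86400*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```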
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
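At this point process 68995 has been repeating the same probe cycle for several minutes: check for a kube-apiserver process with pgrep, list each control-plane component with crictl, and gather kubelet, dmesg, CRI-O and container-status logs when nothing is found. A rough standalone approximation of that cycle, built only from the commands visible in the log (the 3-second sleep is an assumption, not minikube's actual retry interval):

    # Poll for the apiserver process; while it is absent, list the
    # control-plane containers the same way the log above does.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for name in kube-apiserver etcd coredns kube-scheduler \
                  kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        echo "== ${name} =="
        sudo crictl ps -a --quiet --name="${name}"
      done
      sleep 3
    done
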
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
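This is the wait that times out for the 68617 run (the embed-certs-889918 profile, per the pod names later in the log): the metrics-server pod never reports Ready within the four-minute budget, so the extra readiness wait gives up with a context deadline. Outside the test harness the same check can be approximated with kubectl wait; the k8s-app=metrics-server label selector is an assumption (the log only shows the pod name), and the context name is the profile inferred from later log lines:

    # Hedged manual equivalent of the readiness wait (selector assumed):
    kubectl --context embed-certs-889918 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
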
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
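Here the apiserver for the 68617 run is finally reachable: the healthz probe against https://192.168.61.107:8443/healthz returns 200/ok. With shell access to the node, a comparable probe can be reproduced through kubectl's raw API access, reusing the binary and kubeconfig paths that the describe-nodes commands in this log already use (this is a manual sketch, not what minikube itself runs):

    # Manual equivalent of the healthz probe, run on the minikube node.
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz'
    # Expected output when the control plane is healthy: ok
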
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
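	(The embed-certs-889918 profile has finished starting and is set as the default kubectl context at this point. A quick manual verification from the host could look like the following; these commands are illustrative only and were not captured in this log:)

	kubectl config current-context
	kubectl --context embed-certs-889918 get nodes -o wide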
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
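	(The 496-byte conflist copied above is not reproduced in the log. A bridge CNI configuration of the general shape used here would look roughly like the sketch below; the field values are assumptions for illustration, not the exact file minikube wrote:)

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF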
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
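The "extra waiting" step above checks pods matched by each listed label selector for a Ready condition. The compact sketch below covers a single selector with a hypothetical kubeconfig path; the real loop iterates every selector in the list before moving on.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // One selector from the log's list; k8s-app=kube-dns matches the coredns pods.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s Ready=%v\n", p.Name, ready)
        }
    }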
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
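The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding once it returns 200 with the body "ok". A sketch of that probe is below; TLS verification is skipped here purely for brevity, whereas the real client trusts the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.104:8444/healthz") // endpoint taken from the log
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }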
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
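The step above copies a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist on the node. The exact contents are not shown in the log; the sketch below writes a representative bridge-plus-host-local conflist of the same general shape, and the subnet, bridge name, and plugin options are illustrative rather than the file minikube actually ships.

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Illustrative bridge + host-local IPAM config; NOT the actual file contents.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }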
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
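The repeated "kubectl get sa default" runs above are a readiness poll: the default service account must exist before the cluster-admin binding for kube-system is applied. A sketch of that polling loop as run on the node is below; the kubectl binary path and kubeconfig path are copied from the log, while the retry count and interval are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl" // path from the log
        for attempt := 0; attempt < 12; attempt++ {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists; safe to apply RBAC bindings")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for the default service account")
    }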
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
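	At this point the "no-preload-971041" profile has started successfully and kubectl is pointed at it. A minimal sketch of how that state could be spot-checked, using the context name from the line above (these commands are illustrative and were not part of the recorded run):

		kubectl --context no-preload-971041 get nodes
		kubectl --context no-preload-971041 -n kube-system get pods -o wide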
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 
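	The failed start above ends with the suggestion to retry with the kubelet cgroup driver pinned to systemd. A hedged sketch of such a retry, assuming the profile name taken from the node hostname later in this log and the KVM/cri-o settings implied by this job (the flags are illustrative and not reproduced from the run):

		minikube start -p old-k8s-version-570369 --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd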
	
	
	==> CRI-O <==
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.322846110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418731322820232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76348f29-e5c4-4998-8c2c-9c374969f641 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.323651718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ab30849-f0d8-44b5-aa1e-7a571b60a99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.323707403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ab30849-f0d8-44b5-aa1e-7a571b60a99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.323742819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ab30849-f0d8-44b5-aa1e-7a571b60a99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.353202148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c500a75-fa91-4d60-805f-59d53c013586 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.353278014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c500a75-fa91-4d60-805f-59d53c013586 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.354310050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00816fcb-b3b9-41af-b7ba-8087fb6f2a55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.354767597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418731354738665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00816fcb-b3b9-41af-b7ba-8087fb6f2a55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.355215929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad1dbd88-4fe1-410d-9e7e-c7b1d778ad7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.355305411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad1dbd88-4fe1-410d-9e7e-c7b1d778ad7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.355445495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad1dbd88-4fe1-410d-9e7e-c7b1d778ad7a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.387122872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fa66501-966c-4749-afa6-ddc7b54a19b8 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.387216418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fa66501-966c-4749-afa6-ddc7b54a19b8 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.389301316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d73057c7-e41f-4998-9217-a235b60d7704 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.389731758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418731389703109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d73057c7-e41f-4998-9217-a235b60d7704 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.390308402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08d128d0-d8cd-4414-b887-ccf60bd04683 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.390425218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08d128d0-d8cd-4414-b887-ccf60bd04683 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.390484468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=08d128d0-d8cd-4414-b887-ccf60bd04683 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.420562301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00d83fb4-2251-4185-878e-d9dcc45cf910 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.420653536Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00d83fb4-2251-4185-878e-d9dcc45cf910 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.421669985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99349b0e-729c-4aa4-a652-25375fecec0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.422048940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418731422019774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99349b0e-729c-4aa4-a652-25375fecec0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.422590581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddef3634-0048-4acf-9b35-7468879aad8d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.422642150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddef3634-0048-4acf-9b35-7468879aad8d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:52:11 old-k8s-version-570369 crio[650]: time="2024-07-19 19:52:11.422681069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ddef3634-0048-4acf-9b35-7468879aad8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051232] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651686] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.772958] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.839428] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.060470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070691] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149713] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.157817] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.254351] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul19 19:35] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.082478] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.954850] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +10.480938] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 19:39] systemd-fstab-generator[5050]: Ignoring "noauto" option for root device
	[Jul19 19:41] systemd-fstab-generator[5329]: Ignoring "noauto" option for root device
	[  +0.064414] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:52:11 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-570369 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000101320, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000d46180, 0x24, 0x0, ...)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: net.(*Dialer).DialContext(0xc000bd0120, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000d46180, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bfd0c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000d46180, 0x24, 0x60, 0x7fbaf44f8768, 0x118, ...)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: net/http.(*Transport).dial(0xc0007ec000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000d46180, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: net/http.(*Transport).dialConn(0xc0007ec000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000338480, 0x5, 0xc000d46180, 0x24, 0x0, 0xc000d44120, ...)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: net/http.(*Transport).dialConnFor(0xc0007ec000, 0xc0009de630)
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]: created by net/http.(*Transport).queueForDial
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6501]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 19 19:52:06 old-k8s-version-570369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 19:52:06 old-k8s-version-570369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 19:52:06 old-k8s-version-570369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 19 19:52:06 old-k8s-version-570369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 19:52:06 old-k8s-version-570369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6510]: I0719 19:52:06.851296    6510 server.go:416] Version: v1.20.0
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6510]: I0719 19:52:06.851688    6510 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6510]: I0719 19:52:06.854596    6510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6510]: W0719 19:52:06.855915    6510 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 19 19:52:06 old-k8s-version-570369 kubelet[6510]: I0719 19:52:06.856292    6510 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (222.489423ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)
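The kubelet excerpt above shows the node agent crash-looping (systemd restart counter at 114) with every API call failing on "The connection to the server localhost:8443 was refused", which is consistent with the post-mortem reporting the apiserver as Stopped. As a quick, hand-rolled cross-check outside the test suite, a small Go probe against that same endpoint tells you whether anything is listening on 8443 at all; the address is taken from the error above, and skipping TLS verification is acceptable only because this is a pure reachability check:

	// probe_apiserver.go - illustrative only, not part of the minikube test harness.
	// It dials the endpoint the kubelet log reports as refused (localhost:8443).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// the apiserver presents a self-signed certificate, so skip
			// verification for this connectivity check only
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			// mirrors the "connection refused" symptom seen in the kubelet log
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
	}

Given the empty container list in the dump above, no kube-apiserver container is running on the node, so such a probe would fail in exactly the same way the kubelet does.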

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (432.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-889918 -n embed-certs-889918
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:55:27.824673501 +0000 UTC m=+6102.088436481
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-889918 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-889918 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.928µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-889918 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
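The failing assertions above boil down to two steps: wait up to 9m0s for a Ready pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then describe the dashboard-metrics-scraper deployment and check that it carries the overridden registry.k8s.io/echoserver:1.4 image. A minimal sketch of that wait, written against client-go directly rather than the repo's test helpers and assuming the default kubeconfig location, looks like this:

	// wait_dashboard.go - a rough approximation of the pod wait the test performs;
	// it is not the harness code from start_stop_delete_test.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// assumes ~/.kube/config points at the cluster under test
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("no Ready dashboard pod before the deadline:", err)
			return
		}
		fmt.Println("dashboard pod is Ready")
	}

In the run above the poll never succeeds, so the follow-up kubectl describe runs against an already-expired context, which is why it fails immediately with "context deadline exceeded (1.928µs)".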
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-889918 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-889918 logs -n 25: (1.190032904s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:54 UTC |
	| start   | -p newest-cni-229943 --memory=2200 --alsologtostderr   | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:54 UTC |
	| start   | -p auto-179125 --memory=3072                           | auto-179125                  | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:54:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:54:59.408807   75617 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:54:59.408940   75617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:54:59.408951   75617 out.go:304] Setting ErrFile to fd 2...
	I0719 19:54:59.408956   75617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:54:59.409156   75617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:54:59.409808   75617 out.go:298] Setting JSON to false
	I0719 19:54:59.410845   75617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9442,"bootTime":1721409457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:54:59.410903   75617 start.go:139] virtualization: kvm guest
	I0719 19:54:59.413058   75617 out.go:177] * [auto-179125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:54:59.414483   75617 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:54:59.414494   75617 notify.go:220] Checking for updates...
	I0719 19:54:59.417148   75617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:54:59.418457   75617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:54:59.419979   75617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:54:59.421241   75617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:54:59.422486   75617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:54:59.424090   75617 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:54:59.424179   75617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:54:59.424268   75617 config.go:182] Loaded profile config "newest-cni-229943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:54:59.424350   75617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:54:59.464344   75617 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 19:54:59.465366   75617 start.go:297] selected driver: kvm2
	I0719 19:54:59.465380   75617 start.go:901] validating driver "kvm2" against <nil>
	I0719 19:54:59.465391   75617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:54:59.466058   75617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:54:59.466139   75617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:54:59.483342   75617 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:54:59.483420   75617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 19:54:59.483686   75617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:54:59.483729   75617 cni.go:84] Creating CNI manager for ""
	I0719 19:54:59.483741   75617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:54:59.483759   75617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 19:54:59.483894   75617 start.go:340] cluster config:
	{Name:auto-179125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-179125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:54:59.484018   75617 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:54:59.485966   75617 out.go:177] * Starting "auto-179125" primary control-plane node in "auto-179125" cluster
	I0719 19:54:56.993889   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:56.994347   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:56.994373   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:56.994301   75281 retry.go:31] will retry after 1.206416664s: waiting for machine to come up
	I0719 19:54:58.202552   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:58.203270   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:58.203290   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:58.203225   75281 retry.go:31] will retry after 1.739040442s: waiting for machine to come up
	I0719 19:54:59.945100   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:59.945568   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:59.945592   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:59.945529   75281 retry.go:31] will retry after 1.987500402s: waiting for machine to come up
	I0719 19:54:59.487030   75617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:54:59.487075   75617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 19:54:59.487086   75617 cache.go:56] Caching tarball of preloaded images
	I0719 19:54:59.487178   75617 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:54:59.487192   75617 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 19:54:59.487286   75617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/auto-179125/config.json ...
	I0719 19:54:59.487309   75617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/auto-179125/config.json: {Name:mkf9c92b27652bc965925e1d6130fb388e050128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:54:59.487467   75617 start.go:360] acquireMachinesLock for auto-179125: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:55:01.934459   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:01.934970   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:55:01.934999   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:55:01.934926   75281 retry.go:31] will retry after 2.046720121s: waiting for machine to come up
	I0719 19:55:03.984072   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:03.984585   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:55:03.984610   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:55:03.984530   75281 retry.go:31] will retry after 3.352424561s: waiting for machine to come up
	I0719 19:55:07.338335   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:07.338856   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:55:07.338883   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:55:07.338787   75281 retry.go:31] will retry after 4.276239825s: waiting for machine to come up
	I0719 19:55:13.108463   75617 start.go:364] duration metric: took 13.620960527s to acquireMachinesLock for "auto-179125"
	I0719 19:55:13.108527   75617 start.go:93] Provisioning new machine with config: &{Name:auto-179125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-179125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:55:13.108661   75617 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 19:55:13.110940   75617 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 19:55:13.111111   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:55:13.111164   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:55:13.130332   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0719 19:55:13.130697   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:55:13.131258   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:55:13.131283   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:55:13.131629   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:55:13.131818   75617 main.go:141] libmachine: (auto-179125) Calling .GetMachineName
	I0719 19:55:13.132012   75617 main.go:141] libmachine: (auto-179125) Calling .DriverName
	I0719 19:55:13.132178   75617 start.go:159] libmachine.API.Create for "auto-179125" (driver="kvm2")
	I0719 19:55:13.132223   75617 client.go:168] LocalClient.Create starting
	I0719 19:55:13.132259   75617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 19:55:13.132321   75617 main.go:141] libmachine: Decoding PEM data...
	I0719 19:55:13.132347   75617 main.go:141] libmachine: Parsing certificate...
	I0719 19:55:13.132414   75617 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 19:55:13.132445   75617 main.go:141] libmachine: Decoding PEM data...
	I0719 19:55:13.132469   75617 main.go:141] libmachine: Parsing certificate...
	I0719 19:55:13.132514   75617 main.go:141] libmachine: Running pre-create checks...
	I0719 19:55:13.132528   75617 main.go:141] libmachine: (auto-179125) Calling .PreCreateCheck
	I0719 19:55:13.132959   75617 main.go:141] libmachine: (auto-179125) Calling .GetConfigRaw
	I0719 19:55:13.133510   75617 main.go:141] libmachine: Creating machine...
	I0719 19:55:13.133524   75617 main.go:141] libmachine: (auto-179125) Calling .Create
	I0719 19:55:13.133690   75617 main.go:141] libmachine: (auto-179125) Creating KVM machine...
	I0719 19:55:13.134942   75617 main.go:141] libmachine: (auto-179125) DBG | found existing default KVM network
	I0719 19:55:13.136765   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.136592   75727 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:60:6b:93} reservation:<nil>}
	I0719 19:55:13.138443   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.138360   75727 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001135e0}
	I0719 19:55:13.138494   75617 main.go:141] libmachine: (auto-179125) DBG | created network xml: 
	I0719 19:55:13.138517   75617 main.go:141] libmachine: (auto-179125) DBG | <network>
	I0719 19:55:13.138531   75617 main.go:141] libmachine: (auto-179125) DBG |   <name>mk-auto-179125</name>
	I0719 19:55:13.138542   75617 main.go:141] libmachine: (auto-179125) DBG |   <dns enable='no'/>
	I0719 19:55:13.138552   75617 main.go:141] libmachine: (auto-179125) DBG |   
	I0719 19:55:13.138571   75617 main.go:141] libmachine: (auto-179125) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0719 19:55:13.138584   75617 main.go:141] libmachine: (auto-179125) DBG |     <dhcp>
	I0719 19:55:13.138595   75617 main.go:141] libmachine: (auto-179125) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0719 19:55:13.138623   75617 main.go:141] libmachine: (auto-179125) DBG |     </dhcp>
	I0719 19:55:13.138652   75617 main.go:141] libmachine: (auto-179125) DBG |   </ip>
	I0719 19:55:13.138675   75617 main.go:141] libmachine: (auto-179125) DBG |   
	I0719 19:55:13.138696   75617 main.go:141] libmachine: (auto-179125) DBG | </network>
	I0719 19:55:13.138711   75617 main.go:141] libmachine: (auto-179125) DBG | 
	I0719 19:55:13.144193   75617 main.go:141] libmachine: (auto-179125) DBG | trying to create private KVM network mk-auto-179125 192.168.50.0/24...
	I0719 19:55:13.215703   75617 main.go:141] libmachine: (auto-179125) DBG | private KVM network mk-auto-179125 192.168.50.0/24 created
	I0719 19:55:13.215747   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.215656   75727 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:55:13.215764   75617 main.go:141] libmachine: (auto-179125) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125 ...
	I0719 19:55:13.215787   75617 main.go:141] libmachine: (auto-179125) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 19:55:13.215813   75617 main.go:141] libmachine: (auto-179125) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 19:55:13.459548   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.459430   75727 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/id_rsa...
	I0719 19:55:13.527725   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.527523   75727 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/auto-179125.rawdisk...
	I0719 19:55:13.527764   75617 main.go:141] libmachine: (auto-179125) DBG | Writing magic tar header
	I0719 19:55:13.527780   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125 (perms=drwx------)
	I0719 19:55:13.527790   75617 main.go:141] libmachine: (auto-179125) DBG | Writing SSH key tar header
	I0719 19:55:13.527811   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:13.527632   75727 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125 ...
	I0719 19:55:13.527823   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125
	I0719 19:55:13.527853   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 19:55:13.527877   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 19:55:13.527902   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:55:13.527922   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 19:55:13.527933   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 19:55:13.527947   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 19:55:13.527960   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 19:55:13.527968   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 19:55:13.527978   75617 main.go:141] libmachine: (auto-179125) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 19:55:13.527988   75617 main.go:141] libmachine: (auto-179125) Creating domain...
	I0719 19:55:13.527997   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home/jenkins
	I0719 19:55:13.528010   75617 main.go:141] libmachine: (auto-179125) DBG | Checking permissions on dir: /home
	I0719 19:55:13.528021   75617 main.go:141] libmachine: (auto-179125) DBG | Skipping /home - not owner
	I0719 19:55:13.529191   75617 main.go:141] libmachine: (auto-179125) define libvirt domain using xml: 
	I0719 19:55:13.529220   75617 main.go:141] libmachine: (auto-179125) <domain type='kvm'>
	I0719 19:55:13.529232   75617 main.go:141] libmachine: (auto-179125)   <name>auto-179125</name>
	I0719 19:55:13.529240   75617 main.go:141] libmachine: (auto-179125)   <memory unit='MiB'>3072</memory>
	I0719 19:55:13.529254   75617 main.go:141] libmachine: (auto-179125)   <vcpu>2</vcpu>
	I0719 19:55:13.529274   75617 main.go:141] libmachine: (auto-179125)   <features>
	I0719 19:55:13.529285   75617 main.go:141] libmachine: (auto-179125)     <acpi/>
	I0719 19:55:13.529293   75617 main.go:141] libmachine: (auto-179125)     <apic/>
	I0719 19:55:13.529302   75617 main.go:141] libmachine: (auto-179125)     <pae/>
	I0719 19:55:13.529312   75617 main.go:141] libmachine: (auto-179125)     
	I0719 19:55:13.529321   75617 main.go:141] libmachine: (auto-179125)   </features>
	I0719 19:55:13.529330   75617 main.go:141] libmachine: (auto-179125)   <cpu mode='host-passthrough'>
	I0719 19:55:13.529341   75617 main.go:141] libmachine: (auto-179125)   
	I0719 19:55:13.529348   75617 main.go:141] libmachine: (auto-179125)   </cpu>
	I0719 19:55:13.529358   75617 main.go:141] libmachine: (auto-179125)   <os>
	I0719 19:55:13.529366   75617 main.go:141] libmachine: (auto-179125)     <type>hvm</type>
	I0719 19:55:13.529377   75617 main.go:141] libmachine: (auto-179125)     <boot dev='cdrom'/>
	I0719 19:55:13.529387   75617 main.go:141] libmachine: (auto-179125)     <boot dev='hd'/>
	I0719 19:55:13.529400   75617 main.go:141] libmachine: (auto-179125)     <bootmenu enable='no'/>
	I0719 19:55:13.529408   75617 main.go:141] libmachine: (auto-179125)   </os>
	I0719 19:55:13.529417   75617 main.go:141] libmachine: (auto-179125)   <devices>
	I0719 19:55:13.529425   75617 main.go:141] libmachine: (auto-179125)     <disk type='file' device='cdrom'>
	I0719 19:55:13.529440   75617 main.go:141] libmachine: (auto-179125)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/boot2docker.iso'/>
	I0719 19:55:13.529452   75617 main.go:141] libmachine: (auto-179125)       <target dev='hdc' bus='scsi'/>
	I0719 19:55:13.529461   75617 main.go:141] libmachine: (auto-179125)       <readonly/>
	I0719 19:55:13.529472   75617 main.go:141] libmachine: (auto-179125)     </disk>
	I0719 19:55:13.529485   75617 main.go:141] libmachine: (auto-179125)     <disk type='file' device='disk'>
	I0719 19:55:13.529496   75617 main.go:141] libmachine: (auto-179125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 19:55:13.529512   75617 main.go:141] libmachine: (auto-179125)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/auto-179125.rawdisk'/>
	I0719 19:55:13.529522   75617 main.go:141] libmachine: (auto-179125)       <target dev='hda' bus='virtio'/>
	I0719 19:55:13.529533   75617 main.go:141] libmachine: (auto-179125)     </disk>
	I0719 19:55:13.529541   75617 main.go:141] libmachine: (auto-179125)     <interface type='network'>
	I0719 19:55:13.529582   75617 main.go:141] libmachine: (auto-179125)       <source network='mk-auto-179125'/>
	I0719 19:55:13.529610   75617 main.go:141] libmachine: (auto-179125)       <model type='virtio'/>
	I0719 19:55:13.529622   75617 main.go:141] libmachine: (auto-179125)     </interface>
	I0719 19:55:13.529639   75617 main.go:141] libmachine: (auto-179125)     <interface type='network'>
	I0719 19:55:13.529658   75617 main.go:141] libmachine: (auto-179125)       <source network='default'/>
	I0719 19:55:13.529669   75617 main.go:141] libmachine: (auto-179125)       <model type='virtio'/>
	I0719 19:55:13.529677   75617 main.go:141] libmachine: (auto-179125)     </interface>
	I0719 19:55:13.529687   75617 main.go:141] libmachine: (auto-179125)     <serial type='pty'>
	I0719 19:55:13.529699   75617 main.go:141] libmachine: (auto-179125)       <target port='0'/>
	I0719 19:55:13.529709   75617 main.go:141] libmachine: (auto-179125)     </serial>
	I0719 19:55:13.529739   75617 main.go:141] libmachine: (auto-179125)     <console type='pty'>
	I0719 19:55:13.529766   75617 main.go:141] libmachine: (auto-179125)       <target type='serial' port='0'/>
	I0719 19:55:13.529781   75617 main.go:141] libmachine: (auto-179125)     </console>
	I0719 19:55:13.529793   75617 main.go:141] libmachine: (auto-179125)     <rng model='virtio'>
	I0719 19:55:13.529809   75617 main.go:141] libmachine: (auto-179125)       <backend model='random'>/dev/random</backend>
	I0719 19:55:13.529822   75617 main.go:141] libmachine: (auto-179125)     </rng>
	I0719 19:55:13.529849   75617 main.go:141] libmachine: (auto-179125)     
	I0719 19:55:13.529861   75617 main.go:141] libmachine: (auto-179125)     
	I0719 19:55:13.529867   75617 main.go:141] libmachine: (auto-179125)   </devices>
	I0719 19:55:13.529875   75617 main.go:141] libmachine: (auto-179125) </domain>
	I0719 19:55:13.529884   75617 main.go:141] libmachine: (auto-179125) 
	I0719 19:55:13.534450   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:50:17:71 in network default
	I0719 19:55:13.535239   75617 main.go:141] libmachine: (auto-179125) Ensuring networks are active...
	I0719 19:55:13.535268   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:13.536142   75617 main.go:141] libmachine: (auto-179125) Ensuring network default is active
	I0719 19:55:13.536564   75617 main.go:141] libmachine: (auto-179125) Ensuring network mk-auto-179125 is active
	I0719 19:55:13.537221   75617 main.go:141] libmachine: (auto-179125) Getting domain xml...
	I0719 19:55:13.538168   75617 main.go:141] libmachine: (auto-179125) Creating domain...
	I0719 19:55:11.616327   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.616849   75258 main.go:141] libmachine: (newest-cni-229943) Found IP for machine: 192.168.39.88
	I0719 19:55:11.616875   75258 main.go:141] libmachine: (newest-cni-229943) Reserving static IP address...
	I0719 19:55:11.616888   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has current primary IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.617328   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find host DHCP lease matching {name: "newest-cni-229943", mac: "52:54:00:e4:49:e9", ip: "192.168.39.88"} in network mk-newest-cni-229943
	I0719 19:55:11.693437   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Getting to WaitForSSH function...
	I0719 19:55:11.693470   75258 main.go:141] libmachine: (newest-cni-229943) Reserved static IP address: 192.168.39.88
	I0719 19:55:11.693485   75258 main.go:141] libmachine: (newest-cni-229943) Waiting for SSH to be available...
	I0719 19:55:11.696197   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.696508   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:11.696538   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.696738   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Using SSH client type: external
	I0719 19:55:11.696764   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa (-rw-------)
	I0719 19:55:11.696817   75258 main.go:141] libmachine: (newest-cni-229943) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:55:11.696838   75258 main.go:141] libmachine: (newest-cni-229943) DBG | About to run SSH command:
	I0719 19:55:11.696870   75258 main.go:141] libmachine: (newest-cni-229943) DBG | exit 0
	I0719 19:55:11.815966   75258 main.go:141] libmachine: (newest-cni-229943) DBG | SSH cmd err, output: <nil>: 
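The WaitForSSH probe above shells out to the system ssh client with the exact options printed at 19:55:11.696817 and simply runs `exit 0` until it succeeds. A minimal bash sketch of the same liveness check, assuming a hypothetical local key path and the guest IP taken from this log:

    #!/usr/bin/env bash
    # Probe a freshly created guest until `exit 0` succeeds over SSH.
    KEY=$HOME/.minikube/machines/newest-cni-229943/id_rsa   # hypothetical path, mirrors the log
    IP=192.168.39.88
    until ssh -F /dev/null \
          -o ConnectionAttempts=3 -o ConnectTimeout=10 \
          -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o PasswordAuthentication=no -o IdentitiesOnly=yes \
          -i "$KEY" -p 22 docker@"$IP" 'exit 0' 2>/dev/null; do
      sleep 2    # retry until sshd inside the guest answers
    done
    echo "SSH is available on $IP"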
	I0719 19:55:11.816240   75258 main.go:141] libmachine: (newest-cni-229943) KVM machine creation complete!
	I0719 19:55:11.816564   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetConfigRaw
	I0719 19:55:11.817080   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:11.817266   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:11.817483   75258 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 19:55:11.817493   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetState
	I0719 19:55:11.818838   75258 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 19:55:11.818850   75258 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 19:55:11.818856   75258 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 19:55:11.818861   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:11.821149   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.821538   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:11.821562   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.821696   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:11.821899   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:11.822100   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:11.822235   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:11.822417   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:11.822632   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:11.822644   75258 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 19:55:11.923403   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:55:11.923435   75258 main.go:141] libmachine: Detecting the provisioner...
	I0719 19:55:11.923445   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:11.926389   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.926722   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:11.926750   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:11.926962   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:11.927190   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:11.927366   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:11.927524   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:11.927729   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:11.927948   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:11.927963   75258 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 19:55:12.024706   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 19:55:12.024812   75258 main.go:141] libmachine: found compatible host: buildroot
	I0719 19:55:12.024827   75258 main.go:141] libmachine: Provisioning with buildroot...
	I0719 19:55:12.024837   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:55:12.025106   75258 buildroot.go:166] provisioning hostname "newest-cni-229943"
	I0719 19:55:12.025131   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:55:12.025353   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.028250   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.028628   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.028650   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.028838   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:12.029026   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.029205   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.029361   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:12.029528   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:12.029716   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:12.029737   75258 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229943 && echo "newest-cni-229943" | sudo tee /etc/hostname
	I0719 19:55:12.141638   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229943
	
	I0719 19:55:12.141670   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.144451   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.144897   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.144928   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.145119   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:12.145306   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.145492   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.145648   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:12.145805   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:12.145991   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:12.146009   75258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229943/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:55:12.258910   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:55:12.258936   75258 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:55:12.258954   75258 buildroot.go:174] setting up certificates
	I0719 19:55:12.258963   75258 provision.go:84] configureAuth start
	I0719 19:55:12.258974   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:55:12.259287   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:55:12.262035   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.262342   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.262375   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.262567   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.264800   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.265156   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.265184   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.265330   75258 provision.go:143] copyHostCerts
	I0719 19:55:12.265396   75258 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:55:12.265416   75258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:55:12.265497   75258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:55:12.265598   75258 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:55:12.265607   75258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:55:12.265633   75258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:55:12.265688   75258 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:55:12.265700   75258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:55:12.265721   75258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:55:12.265763   75258 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229943 san=[127.0.0.1 192.168.39.88 localhost minikube newest-cni-229943]
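provision.go generates the server certificate in Go; an equivalent openssl sketch that issues a server cert signed by the minikube CA with the same org and SANs listed above (file names are hypothetical, ca.pem/ca-key.pem correspond to the CA files in the log):

    # Issue a CA-signed server certificate with the SANs from this log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-229943"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.88,DNS:localhost,DNS:minikube,DNS:newest-cni-229943")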
	I0719 19:55:12.476525   75258 provision.go:177] copyRemoteCerts
	I0719 19:55:12.476584   75258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:55:12.476606   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.479344   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.479674   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.479711   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.479861   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:12.480073   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.480237   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:12.480370   75258 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:55:12.557265   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:55:12.580151   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:55:12.601963   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:55:12.624173   75258 provision.go:87] duration metric: took 365.197501ms to configureAuth
	I0719 19:55:12.624200   75258 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:55:12.624393   75258 config.go:182] Loaded profile config "newest-cni-229943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:55:12.624507   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.627395   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.627738   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.627767   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.627927   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:12.628159   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.628307   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.628474   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:12.628623   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:12.628803   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:12.628817   75258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:55:12.880873   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
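The command above writes the insecure-registry option to /etc/sysconfig/crio.minikube inside the guest and restarts CRI-O. A quick manual check that it took effect (run on the guest; the file path comes from the log, the last grep is illustrative since how the unit consumes the sysconfig file is ISO-specific):

    cat /etc/sysconfig/crio.minikube     # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl restart crio
    systemctl is-active crio             # expect "active"
    systemctl cat crio | grep -i environment   # shows whether the unit pulls in the sysconfig file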
	
	I0719 19:55:12.880916   75258 main.go:141] libmachine: Checking connection to Docker...
	I0719 19:55:12.880928   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetURL
	I0719 19:55:12.882434   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Using libvirt version 6000000
	I0719 19:55:12.884800   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.885236   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.885271   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.885433   75258 main.go:141] libmachine: Docker is up and running!
	I0719 19:55:12.885449   75258 main.go:141] libmachine: Reticulating splines...
	I0719 19:55:12.885457   75258 client.go:171] duration metric: took 21.73223431s to LocalClient.Create
	I0719 19:55:12.885482   75258 start.go:167] duration metric: took 21.73231449s to libmachine.API.Create "newest-cni-229943"
	I0719 19:55:12.885494   75258 start.go:293] postStartSetup for "newest-cni-229943" (driver="kvm2")
	I0719 19:55:12.885512   75258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:55:12.885534   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:12.885771   75258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:55:12.885794   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:12.888052   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.888351   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:12.888376   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:12.888468   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:12.888647   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:12.888831   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:12.889051   75258 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:55:12.965673   75258 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:55:12.969724   75258 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:55:12.969748   75258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:55:12.969823   75258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:55:12.969949   75258 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:55:12.970092   75258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:55:12.978899   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:55:13.001994   75258 start.go:296] duration metric: took 116.48511ms for postStartSetup
	I0719 19:55:13.002044   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetConfigRaw
	I0719 19:55:13.002668   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:55:13.005520   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.006012   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:13.006043   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.006360   75258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json ...
	I0719 19:55:13.006535   75258 start.go:128] duration metric: took 21.871036407s to createHost
	I0719 19:55:13.006585   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:13.009385   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.009730   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:13.009758   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.009938   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:13.010107   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:13.010302   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:13.010473   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:13.010668   75258 main.go:141] libmachine: Using SSH client type: native
	I0719 19:55:13.010863   75258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:55:13.010877   75258 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:55:13.108319   75258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721418913.077001454
	
	I0719 19:55:13.108344   75258 fix.go:216] guest clock: 1721418913.077001454
	I0719 19:55:13.108355   75258 fix.go:229] Guest: 2024-07-19 19:55:13.077001454 +0000 UTC Remote: 2024-07-19 19:55:13.006548095 +0000 UTC m=+21.979864772 (delta=70.453359ms)
	I0719 19:55:13.108374   75258 fix.go:200] guest clock delta is within tolerance: 70.453359ms
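fix.go compares the guest clock against the host and accepts the ~70ms delta shown above. A bash sketch of the same comparison, reusing the guest IP from this log and a hypothetical key path:

    IP=192.168.39.88
    KEY=$HOME/.minikube/machines/newest-cni-229943/id_rsa   # hypothetical path
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@"$IP" 'date +%s.%N')
    # Absolute difference in seconds; sub-second deltas are within tolerance here.
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'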
	I0719 19:55:13.108379   75258 start.go:83] releasing machines lock for "newest-cni-229943", held for 21.972969279s
	I0719 19:55:13.108401   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:13.108685   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:55:13.111586   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.111980   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:13.112004   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.112194   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:13.112770   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:13.112979   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:13.113061   75258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:55:13.113112   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:13.113211   75258 ssh_runner.go:195] Run: cat /version.json
	I0719 19:55:13.113231   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:55:13.116067   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.116419   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:13.116452   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.116569   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.116577   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:13.116764   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:13.116947   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:13.116955   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:13.116971   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:13.117120   75258 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:55:13.117189   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:55:13.117328   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:55:13.117507   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:55:13.117697   75258 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:55:13.230386   75258 ssh_runner.go:195] Run: systemctl --version
	I0719 19:55:13.236468   75258 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:55:13.401276   75258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:55:13.407463   75258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:55:13.407570   75258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:55:13.423736   75258 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:55:13.423763   75258 start.go:495] detecting cgroup driver to use...
	I0719 19:55:13.423855   75258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:55:13.440147   75258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:55:13.459614   75258 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:55:13.459670   75258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:55:13.476778   75258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:55:13.492099   75258 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:55:13.612922   75258 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:55:13.756948   75258 docker.go:233] disabling docker service ...
	I0719 19:55:13.757004   75258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:55:13.770672   75258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:55:13.782875   75258 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:55:13.926134   75258 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:55:14.053640   75258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:55:14.067762   75258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:55:14.085553   75258 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:55:14.085620   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.095614   75258 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:55:14.095664   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.105438   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.115161   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.124893   75258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:55:14.135132   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.145070   75258 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:55:14.163544   75258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
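The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place. Based only on those commands, the touched keys should end up roughly as below; the TOML section names are the standard CRI-O ones and are an assumption, since the log shows only the key/value edits:

    cat <<'EOF' > /dev/null   # illustrative fragment, not a dump of the real file
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF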
	I0719 19:55:14.176198   75258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:55:14.189727   75258 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:55:14.189789   75258 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:55:14.205717   75258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
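Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, the br_netfilter module is loaded and IPv4 forwarding is switched on. Verifying by hand on the guest uses only standard commands:

    sudo modprobe br_netfilter
    lsmod | grep br_netfilter                    # module should now be listed
    sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded
    sysctl net.ipv4.ip_forward                   # expect "net.ipv4.ip_forward = 1"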
	I0719 19:55:14.218005   75258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:55:14.343541   75258 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:55:14.487663   75258 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:55:14.487752   75258 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:55:14.492520   75258 start.go:563] Will wait 60s for crictl version
	I0719 19:55:14.492587   75258 ssh_runner.go:195] Run: which crictl
	I0719 19:55:14.496328   75258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:55:14.539617   75258 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:55:14.539698   75258 ssh_runner.go:195] Run: crio --version
	I0719 19:55:14.570129   75258 ssh_runner.go:195] Run: crio --version
	I0719 19:55:14.604715   75258 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:55:14.605824   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:55:14.609348   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:14.609829   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:55:04 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:55:14.609856   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:55:14.610115   75258 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:55:14.614133   75258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:55:14.627545   75258 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0719 19:55:14.628795   75258 kubeadm.go:883] updating cluster {Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:55:14.628914   75258 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:55:14.628977   75258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:55:14.664069   75258 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:55:14.664137   75258 ssh_runner.go:195] Run: which lz4
	I0719 19:55:14.668188   75258 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:55:14.672492   75258 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:55:14.672522   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0719 19:55:15.930684   75258 crio.go:462] duration metric: took 1.262530698s to copy over tarball
	I0719 19:55:15.930762   75258 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:55:14.861508   75617 main.go:141] libmachine: (auto-179125) Waiting to get IP...
	I0719 19:55:14.862560   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:14.863139   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:14.863169   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:14.863124   75727 retry.go:31] will retry after 248.441939ms: waiting for machine to come up
	I0719 19:55:15.113821   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:15.114304   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:15.114329   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:15.114260   75727 retry.go:31] will retry after 316.435809ms: waiting for machine to come up
	I0719 19:55:15.432921   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:15.433565   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:15.433596   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:15.433456   75727 retry.go:31] will retry after 384.119601ms: waiting for machine to come up
	I0719 19:55:15.819164   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:15.819673   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:15.819700   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:15.819637   75727 retry.go:31] will retry after 601.23179ms: waiting for machine to come up
	I0719 19:55:16.422537   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:16.423034   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:16.423055   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:16.422998   75727 retry.go:31] will retry after 584.290864ms: waiting for machine to come up
	I0719 19:55:17.008722   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:17.009406   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:17.009435   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:17.009367   75727 retry.go:31] will retry after 951.488263ms: waiting for machine to come up
	I0719 19:55:17.962086   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:17.962615   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:17.962644   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:17.962534   75727 retry.go:31] will retry after 1.032122583s: waiting for machine to come up
	I0719 19:55:18.996497   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:18.996971   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:18.997000   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:18.996917   75727 retry.go:31] will retry after 1.480475556s: waiting for machine to come up
	I0719 19:55:18.022359   75258 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.09156806s)
	I0719 19:55:18.022385   75258 crio.go:469] duration metric: took 2.091673202s to extract the tarball
	I0719 19:55:18.022393   75258 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:55:18.058652   75258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:55:18.102200   75258 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:55:18.102227   75258 cache_images.go:84] Images are preloaded, skipping loading
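The steps between 19:55:14.66 and 19:55:18.10 copy the ~387 MB preload tarball into the guest, unpack it into /var, and confirm crictl now reports the images as preloaded. Done by hand, the same sequence is roughly as follows (tarball path and guest address from the log; the SSH key path is hypothetical and /tmp is used here instead of / for convenience):

    TAR=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
    KEY=$HOME/.minikube/machines/newest-cni-229943/id_rsa   # hypothetical path
    scp -i "$KEY" "$TAR" docker@192.168.39.88:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.88 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo crictl images'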
	I0719 19:55:18.102235   75258 kubeadm.go:934] updating node { 192.168.39.88 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:55:18.102393   75258 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:55:18.102483   75258 ssh_runner.go:195] Run: crio config
	I0719 19:55:18.155218   75258 cni.go:84] Creating CNI manager for ""
	I0719 19:55:18.155238   75258 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:55:18.155247   75258 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0719 19:55:18.155269   75258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229943 NodeName:newest-cni-229943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:55:18.155402   75258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229943"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:55:18.155462   75258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:55:18.166624   75258 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:55:18.166694   75258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:55:18.177149   75258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0719 19:55:18.193477   75258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:55:18.210176   75258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
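Once /var/tmp/minikube/kubeadm.yaml.new is on the guest, a quick sanity check that the rendered values match the config dump printed above (pod CIDR, cgroup driver, CRI socket) is just a grep:

    grep -E 'podSubnet|clusterCIDR|cgroupDriver|criSocket|containerRuntimeEndpoint' \
      /var/tmp/minikube/kubeadm.yaml.new
    # Expected, from the dump above:
    #   criSocket: unix:///var/run/crio/crio.sock
    #   podSubnet: "10.42.0.0/16"
    #   cgroupDriver: cgroupfs
    #   containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    #   clusterCIDR: "10.42.0.0/16"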
	I0719 19:55:18.227968   75258 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0719 19:55:18.231728   75258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:55:18.244001   75258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:55:18.379726   75258 ssh_runner.go:195] Run: sudo systemctl start kubelet
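After the 10-kubeadm.conf drop-in and kubelet.service are copied over, daemon-reload plus start brings kubelet up. Checking that systemd actually picked up the drop-in uses standard systemctl/journalctl commands:

    systemctl cat kubelet                      # should list /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl is-active kubelet
    journalctl -u kubelet --no-pager -n 20     # recent kubelet output if it is not active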
	I0719 19:55:18.400600   75258 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943 for IP: 192.168.39.88
	I0719 19:55:18.400628   75258 certs.go:194] generating shared ca certs ...
	I0719 19:55:18.400649   75258 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.400812   75258 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:55:18.400863   75258 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:55:18.400873   75258 certs.go:256] generating profile certs ...
	I0719 19:55:18.400975   75258 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.key
	I0719 19:55:18.400993   75258 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.crt with IP's: []
	I0719 19:55:18.460231   75258 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.crt ...
	I0719 19:55:18.460265   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.crt: {Name:mk3adc94de154eee8415726e35bfc3b2eeacfba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.460475   75258 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.key ...
	I0719 19:55:18.460495   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.key: {Name:mk0a9dec32414a9d425f40a3c36df4716fd7c5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.460630   75258 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key.f8d6a172
	I0719 19:55:18.460653   75258 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt.f8d6a172 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.88]
	I0719 19:55:18.551042   75258 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt.f8d6a172 ...
	I0719 19:55:18.551070   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt.f8d6a172: {Name:mkb93a107f957b0ab665bd3f6a93cec77778fb01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.551267   75258 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key.f8d6a172 ...
	I0719 19:55:18.551284   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key.f8d6a172: {Name:mk10438591f7b993ec943b129b4b8c6435c4f74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.551398   75258 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt.f8d6a172 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt
	I0719 19:55:18.551496   75258 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key.f8d6a172 -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key
	I0719 19:55:18.551554   75258 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key
	I0719 19:55:18.551571   75258 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.crt with IP's: []
	I0719 19:55:18.634270   75258 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.crt ...
	I0719 19:55:18.634307   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.crt: {Name:mke222cdc7651c4b7b21bbe71a37c3e15d01f6a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.634529   75258 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key ...
	I0719 19:55:18.634551   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key: {Name:mka95af8220868ade917a5b7d0e6a14fac6a8625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:55:18.634826   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:55:18.634883   75258 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:55:18.634898   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:55:18.634941   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:55:18.634972   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:55:18.635003   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:55:18.635055   75258 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:55:18.635641   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:55:18.661675   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:55:18.683973   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:55:18.706293   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:55:18.730294   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:55:18.754107   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:55:18.778769   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:55:18.801611   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:55:18.824551   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:55:18.846875   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:55:18.874048   75258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:55:18.897048   75258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:55:18.913841   75258 ssh_runner.go:195] Run: openssl version
	I0719 19:55:18.919535   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:55:18.933428   75258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:55:18.938765   75258 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:55:18.938818   75258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:55:18.946392   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:55:18.960022   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:55:18.973619   75258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:55:18.979261   75258 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:55:18.979322   75258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:55:18.986470   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:55:19.000425   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:55:19.014119   75258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:55:19.019504   75258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:55:19.019560   75258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:55:19.026462   75258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:55:19.038791   75258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:55:19.042867   75258 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 19:55:19.042928   75258 kubeadm.go:392] StartCluster: {Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:55:19.043014   75258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:55:19.043080   75258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:55:19.085006   75258 cri.go:89] found id: ""
	I0719 19:55:19.085109   75258 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:55:19.095332   75258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:55:19.105188   75258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:55:19.114589   75258 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:55:19.114609   75258 kubeadm.go:157] found existing configuration files:
	
	I0719 19:55:19.114658   75258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:55:19.123119   75258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:55:19.123191   75258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:55:19.132352   75258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:55:19.140977   75258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:55:19.141039   75258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:55:19.150064   75258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:55:19.158747   75258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:55:19.158808   75258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:55:19.172000   75258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:55:19.181762   75258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:55:19.181826   75258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:55:19.201622   75258 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:55:19.325720   75258 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:55:19.325874   75258 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:55:19.439851   75258 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:55:19.440062   75258 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:55:19.440203   75258 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:55:19.450916   75258 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:55:19.453528   75258 out.go:204]   - Generating certificates and keys ...
	I0719 19:55:19.454049   75258 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:55:19.454145   75258 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:55:19.736585   75258 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 19:55:20.116478   75258 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 19:55:20.437688   75258 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 19:55:20.566139   75258 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 19:55:20.651582   75258 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 19:55:20.651787   75258 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-229943] and IPs [192.168.39.88 127.0.0.1 ::1]
	I0719 19:55:20.775534   75258 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 19:55:20.775749   75258 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-229943] and IPs [192.168.39.88 127.0.0.1 ::1]
	I0719 19:55:21.021500   75258 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 19:55:21.106275   75258 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 19:55:21.375427   75258 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 19:55:21.375622   75258 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:55:21.499606   75258 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:55:21.583266   75258 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:55:21.679289   75258 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:55:21.978655   75258 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:55:22.260003   75258 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:55:22.260636   75258 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:55:22.264021   75258 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:55:20.479403   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:20.479903   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:20.479935   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:20.479849   75727 retry.go:31] will retry after 1.446419538s: waiting for machine to come up
	I0719 19:55:21.928565   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:21.929101   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:21.929129   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:21.929055   75727 retry.go:31] will retry after 2.008305587s: waiting for machine to come up
	I0719 19:55:23.940654   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:55:23.941213   75617 main.go:141] libmachine: (auto-179125) DBG | unable to find current IP address of domain auto-179125 in network mk-auto-179125
	I0719 19:55:23.941236   75617 main.go:141] libmachine: (auto-179125) DBG | I0719 19:55:23.941165   75727 retry.go:31] will retry after 1.764126007s: waiting for machine to come up
	I0719 19:55:22.266564   75258 out.go:204]   - Booting up control plane ...
	I0719 19:55:22.266691   75258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:55:22.266785   75258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:55:22.267284   75258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:55:22.290673   75258 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:55:22.297816   75258 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:55:22.297895   75258 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:55:22.456218   75258 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:55:22.456332   75258 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:55:23.457288   75258 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001458976s
	I0719 19:55:23.457414   75258 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.473077116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418928473053729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b11bf95f-ecec-47e9-8d54-1d30e6c78774 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.473531165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a791b84-8a81-43a8-8838-dfaf532ad73b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.473616127Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a791b84-8a81-43a8-8838-dfaf532ad73b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.473830536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a791b84-8a81-43a8-8838-dfaf532ad73b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.516485578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40d741cc-8530-4747-9fac-6ea14cace495 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.516622834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40d741cc-8530-4747-9fac-6ea14cace495 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.518207447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ff44ee9-f2fc-4941-9727-c8c3e77aad2b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.518837449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418928518767581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ff44ee9-f2fc-4941-9727-c8c3e77aad2b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.519709958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=334fb115-f703-486f-8d54-26958c669a8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.519835384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=334fb115-f703-486f-8d54-26958c669a8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.520153770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=334fb115-f703-486f-8d54-26958c669a8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.558472769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b847c4ca-06ef-43f5-8e06-26a2161806ad name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.558587325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b847c4ca-06ef-43f5-8e06-26a2161806ad name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.559741475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a6f9f3c-8507-45f0-a335-cfd6d863dc9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.560497138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418928560466127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a6f9f3c-8507-45f0-a335-cfd6d863dc9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.561242994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=718c1eca-0c18-47e9-856d-08d7975227f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.561327280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=718c1eca-0c18-47e9-856d-08d7975227f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.561610403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=718c1eca-0c18-47e9-856d-08d7975227f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.596343216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=536a0457-3e21-4e8d-a201-e584af4a6aeb name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.596433573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=536a0457-3e21-4e8d-a201-e584af4a6aeb name=/runtime.v1.RuntimeService/Version
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.597612345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7d9e8bc-4706-4325-9ff5-135530f3933f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.598126266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418928598093278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7d9e8bc-4706-4325-9ff5-135530f3933f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.598684224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f05226db-7d75-4959-b2f5-b2bcae033f5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.598754506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f05226db-7d75-4959-b2f5-b2bcae033f5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:55:28 embed-certs-889918 crio[720]: time="2024-07-19 19:55:28.599072870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721417718151404390,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annotations:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e,PodSandboxId:4f9dbafba502eb72b369dbeef22b0e8adf20456360b59933e5b5c98ee93ccd39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417702835392362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d768a85f632b7b19b77fd161977ec72,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7a3241c865128c76555619cdc8b1c2c5eef8fac386a32119b18be40113f9bd,PodSandboxId:f9fcc8073d659d4d38b14afa81c811af365ff04c21f792bd3feb48283e65a679,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721417697929057047,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9dc201e7-6e82-4a8b-8272-038963f23b13,},Annotations:map[string]string{io.kubernetes.container.hash: fd43310a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb,PodSandboxId:d8745d8bc0585e82361a075bb4fcaa7a7ab76e24636a9f2542a157eb2b2544bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721417695006176796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7h48w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd691a13-e571-4148-bef0-ba0552277312,},Annotations:map[string]string{io.kubernetes.container.hash: 156afc3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e,PodSandboxId:dad11fc8c21171582ffbc5afa86b0c912a0f9d1c21f3deed3dd4e1d514288c14,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721417687345596070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5fcgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67ef345b-f2ef-4e18-a55c-91e2c08a
37b8,},Annotations:map[string]string{io.kubernetes.container.hash: 61970ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91,PodSandboxId:f3324641df8191a240ebdb136f7ca9f8d63966cb2d71505fa977c737615a172b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721417687292746960,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32274055-6a2d-4cf9-8ae8-3c5f7cdc66db,},Annota
tions:map[string]string{io.kubernetes.container.hash: 60d8078a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c,PodSandboxId:380d6b10300658327e0245a9de2140bc45e48f39f48c181237a4f7c8dec7a028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417683027506333,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3753ac9e0fe774872de78b19b61585b,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 768f0e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5,PodSandboxId:0bbaa257463ac942bb33a6a56689e4aa343b916154da115561bd923195238f6a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417683018442852,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb9af376988ab1bc110fa346242eb5f,},Annotations:map[string]string{io.kubernetes.container.hash:
3c1e0263,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498,PodSandboxId:ae68220a155c45447d99f956f9aa4b9ae7f83a337cc541216e69b08fc0a68220,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417683012400479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-889918,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8be6b19f7bf77976d0844cedb8035b7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f05226db-7d75-4959-b2f5-b2bcae033f5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aec5bd5dcc9a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   f3324641df819       storage-provisioner
	560ffa4638473       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      20 minutes ago      Running             kube-scheduler            1                   4f9dbafba502e       kube-scheduler-embed-certs-889918
	dc7a3241c8651       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   f9fcc8073d659       busybox
	9c90b21a180b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   d8745d8bc0585       coredns-7db6d8ff4d-7h48w
	4b54e8db4eb45       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      20 minutes ago      Running             kube-proxy                1                   dad11fc8c2117       kube-proxy-5fcgs
	f0710a5e9dce0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   f3324641df819       storage-provisioner
	46b13ce63930e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   380d6b1030065       etcd-embed-certs-889918
	82586fde0cf25       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      20 minutes ago      Running             kube-apiserver            1                   0bbaa257463ac       kube-apiserver-embed-certs-889918
	4e88e6527a2ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      20 minutes ago      Running             kube-controller-manager   1                   ae68220a155c4       kube-controller-manager-embed-certs-889918
	
	
	==> coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55154 - 26044 "HINFO IN 4126011779228178089.2074560202026123776. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008880797s
	
	
	==> describe nodes <==
	Name:               embed-certs-889918
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-889918
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=embed-certs-889918
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_26_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-889918
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:55:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:50:36 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:50:36 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:50:36 +0000   Fri, 19 Jul 2024 19:26:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:50:36 +0000   Fri, 19 Jul 2024 19:34:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.107
	  Hostname:    embed-certs-889918
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a76992b08f504b5080d8420bec37f7ca
	  System UUID:                a76992b0-8f50-4b50-80d8-420bec37f7ca
	  Boot ID:                    28064893-1d02-4b22-ba14-280f6e8baa55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-7h48w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-889918                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-889918             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-889918    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-5fcgs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-889918             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-sfc85               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-889918 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-889918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-889918 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node embed-certs-889918 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-889918 event: Registered Node embed-certs-889918 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-889918 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-889918 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-889918 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-889918 event: Registered Node embed-certs-889918 in Controller
	
	
	==> dmesg <==
	[Jul19 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058511] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051032] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.554846] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.805303] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.361469] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.058177] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057030] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.193187] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.132131] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.301687] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.273336] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +0.061117] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.765459] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.532820] kauditd_printk_skb: 87 callbacks suppressed
	[  +1.818101] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +3.945177] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.729333] kauditd_printk_skb: 43 callbacks suppressed
	[Jul19 19:35] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] <==
	{"level":"warn","ts":"2024-07-19T19:35:03.767031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.121428Z","time spent":"645.123925ms","remote":"127.0.0.1:41248","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":776,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447d0511b51\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447d0511b51\" value_size:679 lease:8341951617747231911 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T19:35:04.168392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"950.648276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" ","response":"range_response_count:1 size:4283"}
	{"level":"info","ts":"2024-07-19T19:35:04.168467Z","caller":"traceutil/trace.go:171","msg":"trace[416421072] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-sfc85; range_end:; response_count:1; response_revision:585; }","duration":"950.762684ms","start":"2024-07-19T19:35:03.217689Z","end":"2024-07-19T19:35:04.168451Z","steps":["trace[416421072] 'agreement among raft nodes before linearized reading'  (duration: 549.6199ms)","trace[416421072] 'range keys from in-memory index tree'  (duration: 400.973836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.168503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.217673Z","time spent":"950.821676ms","remote":"127.0.0.1:41346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4305,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-sfc85\" "}
	{"level":"warn","ts":"2024-07-19T19:35:04.168747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"721.180245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T19:35:04.168792Z","caller":"traceutil/trace.go:171","msg":"trace[251332242] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"721.246369ms","start":"2024-07-19T19:35:03.447534Z","end":"2024-07-19T19:35:04.16878Z","steps":["trace[251332242] 'agreement among raft nodes before linearized reading'  (duration: 319.793515ms)","trace[251332242] 'range keys from in-memory index tree'  (duration: 401.397634ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.168831Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.447493Z","time spent":"721.332384ms","remote":"127.0.0.1:41156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-19T19:35:04.169356Z","caller":"traceutil/trace.go:171","msg":"trace[594046863] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"398.578882ms","start":"2024-07-19T19:35:03.770764Z","end":"2024-07-19T19:35:04.169343Z","steps":["trace[594046863] 'process raft request'  (duration: 301.447532ms)","trace[594046863] 'compare'  (duration: 95.973061ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.169464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:03.770747Z","time spent":"398.678502ms","remote":"127.0.0.1:41248","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":776,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447ef61a091\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-embed-certs-889918.17e3b447ef61a091\" value_size:679 lease:8341951617747231911 >> failure:<>"}
	{"level":"info","ts":"2024-07-19T19:35:04.477831Z","caller":"traceutil/trace.go:171","msg":"trace[795511529] linearizableReadLoop","detail":"{readStateIndex:632; appliedIndex:631; }","duration":"298.444962ms","start":"2024-07-19T19:35:04.179366Z","end":"2024-07-19T19:35:04.477811Z","steps":["trace[795511529] 'read index received'  (duration: 265.220724ms)","trace[795511529] 'applied index is now lower than readState.Index'  (duration: 33.223679ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:35:04.478141Z","caller":"traceutil/trace.go:171","msg":"trace[1692419606] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"299.479538ms","start":"2024-07-19T19:35:04.178648Z","end":"2024-07-19T19:35:04.478127Z","steps":["trace[1692419606] 'process raft request'  (duration: 265.98541ms)","trace[1692419606] 'compare'  (duration: 33.094369ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:35:04.47841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.028658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-889918\" ","response":"range_response_count:1 size:5486"}
	{"level":"info","ts":"2024-07-19T19:35:04.47966Z","caller":"traceutil/trace.go:171","msg":"trace[1396925025] range","detail":"{range_begin:/registry/minions/embed-certs-889918; range_end:; response_count:1; response_revision:587; }","duration":"300.305148ms","start":"2024-07-19T19:35:04.179342Z","end":"2024-07-19T19:35:04.479647Z","steps":["trace[1396925025] 'agreement among raft nodes before linearized reading'  (duration: 298.978802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:35:04.480176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T19:35:04.17933Z","time spent":"300.826906ms","remote":"127.0.0.1:41344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5508,"request content":"key:\"/registry/minions/embed-certs-889918\" "}
	{"level":"warn","ts":"2024-07-19T19:35:04.478494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.307987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2024-07-19T19:35:04.482078Z","caller":"traceutil/trace.go:171","msg":"trace[348034883] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:587; }","duration":"129.917645ms","start":"2024-07-19T19:35:04.352146Z","end":"2024-07-19T19:35:04.482063Z","steps":["trace[348034883] 'agreement among raft nodes before linearized reading'  (duration: 126.305142ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:44:45.113531Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":833}
	{"level":"info","ts":"2024-07-19T19:44:45.12317Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":833,"took":"9.351791ms","hash":762878381,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2625536,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-19T19:44:45.123223Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":762878381,"revision":833,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T19:49:45.121974Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1075}
	{"level":"info","ts":"2024-07-19T19:49:45.126716Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1075,"took":"4.356565ms","hash":3551709053,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-19T19:49:45.126787Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3551709053,"revision":1075,"compact-revision":833}
	{"level":"info","ts":"2024-07-19T19:54:45.129747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1318}
	{"level":"info","ts":"2024-07-19T19:54:45.133709Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1318,"took":"3.633178ms","hash":1263174295,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-19T19:54:45.133758Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1263174295,"revision":1318,"compact-revision":1075}
	
	
	==> kernel <==
	 19:55:28 up 21 min,  0 users,  load average: 0.19, 0.09, 0.09
	Linux embed-certs-889918 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] <==
	I0719 19:49:47.335299       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:50:47.334717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:50:47.334990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:50:47.335029       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:50:47.335818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:50:47.335874       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:50:47.336981       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:52:47.335837       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:52:47.336236       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:52:47.336277       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:52:47.337546       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:52:47.337674       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:52:47.337694       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:54:46.339101       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:46.339403       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 19:54:47.340042       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:47.340115       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:54:47.340134       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:54:47.340187       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:47.340258       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:54:47.341538       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] <==
	I0719 19:49:29.501985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:49:58.975048       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:49:59.509448       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:50:28.979343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:50:29.516705       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:50:58.984900       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:50:59.523854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:51:03.978093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="326.991µs"
	I0719 19:51:18.967610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="132.888µs"
	E0719 19:51:28.990224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:51:29.531134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:51:58.995946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:51:59.542077       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:52:29.000386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:52:29.549484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:52:59.005539       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:52:59.556298       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:29.012798       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:53:29.565761       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:59.018473       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:53:59.573699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:29.023416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:54:29.580734       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:59.028551       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:54:59.587883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] <==
	I0719 19:34:47.537614       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:34:47.554334       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.107"]
	I0719 19:34:47.617146       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:34:47.617197       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:34:47.617215       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:34:47.619304       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:34:47.619487       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:34:47.619512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:34:47.621884       1 config.go:192] "Starting service config controller"
	I0719 19:34:47.621953       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:34:47.621985       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:34:47.622005       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:34:47.623158       1 config.go:319] "Starting node config controller"
	I0719 19:34:47.623182       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:34:47.722866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:34:47.722983       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:34:47.723214       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] <==
	I0719 19:35:03.560743       1 serving.go:380] Generated self-signed cert in-memory
	I0719 19:35:04.507022       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 19:35:04.507065       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:35:04.511499       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:35:04.511713       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:35:04.512032       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:35:04.511651       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0719 19:35:04.512260       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0719 19:35:04.511723       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:35:04.511789       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0719 19:35:04.513644       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 19:35:04.612148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:35:04.612398       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0719 19:35:04.614516       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:52:41 embed-certs-889918 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:52:41 embed-certs-889918 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:52:41 embed-certs-889918 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:52:49 embed-certs-889918 kubelet[932]: E0719 19:52:49.954522     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:53:02 embed-certs-889918 kubelet[932]: E0719 19:53:02.954307     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:53:16 embed-certs-889918 kubelet[932]: E0719 19:53:16.954308     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:53:27 embed-certs-889918 kubelet[932]: E0719 19:53:27.955781     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:53:41 embed-certs-889918 kubelet[932]: E0719 19:53:41.968885     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:53:41 embed-certs-889918 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:53:41 embed-certs-889918 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:53:41 embed-certs-889918 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:53:41 embed-certs-889918 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:53:42 embed-certs-889918 kubelet[932]: E0719 19:53:42.956176     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:53:53 embed-certs-889918 kubelet[932]: E0719 19:53:53.954528     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:54:08 embed-certs-889918 kubelet[932]: E0719 19:54:08.954700     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:54:20 embed-certs-889918 kubelet[932]: E0719 19:54:20.955135     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:54:32 embed-certs-889918 kubelet[932]: E0719 19:54:32.954594     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:54:41 embed-certs-889918 kubelet[932]: E0719 19:54:41.968604     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:54:41 embed-certs-889918 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:54:41 embed-certs-889918 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:54:41 embed-certs-889918 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:54:41 embed-certs-889918 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:54:46 embed-certs-889918 kubelet[932]: E0719 19:54:46.954327     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:55:00 embed-certs-889918 kubelet[932]: E0719 19:55:00.954416     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	Jul 19 19:55:14 embed-certs-889918 kubelet[932]: E0719 19:55:14.954185     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sfc85" podUID="fc2a1efe-1728-42d9-bd3d-9eb29af783e9"
	
	
	==> storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] <==
	I0719 19:35:18.240073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:35:18.250264       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:35:18.250322       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:35:35.652631       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:35:35.653971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a!
	I0719 19:35:35.654966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5f04683-9026-4a64-8f24-aab1220d4b9e", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a became leader
	I0719 19:35:35.754708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-889918_3154b061-c2a8-42a6-b440-462d80044f1a!
	
	
	==> storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] <==
	I0719 19:34:47.482338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 19:35:17.489129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-889918 -n embed-certs-889918
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-889918 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-sfc85
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85: exit status 1 (64.226199ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-sfc85" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-889918 describe pod metrics-server-569cc877fc-sfc85: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (432.79s)
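For context, AddonExistsAfterStop spends its budget waiting for the dashboard addon pods (the default-k8s-diff-port run below shows the same wait on the k8s-app=kubernetes-dashboard selector). A rough manual equivalent, assuming the embed-certs-889918 profile is still running and that the embed-certs variant uses the same namespace, label selector, and 9m window as the run below, would be:

	# Wait (capped at 9m, as in the run below) for the dashboard addon pods, then list them.
	kubectl --context embed-certs-889918 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context embed-certs-889918 -n kubernetes-dashboard get pods -o wide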

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:56:27.237679753 +0000 UTC m=+6161.501442729
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-578533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.527µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-578533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
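The assertion on start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment carries the overridden registry.k8s.io/echoserver:1.4 image; here the preceding describe call hit the already-expired test deadline, so the deployment info it would have printed is empty. A minimal manual check along the same lines, assuming the profile is still up and using the same deployment name as the describe call above, might be:

	# Print the container image(s) of the dashboard-metrics-scraper deployment
	# and confirm one of them contains registry.k8s.io/echoserver:1.4.
	kubectl --context default-k8s-diff-port-578533 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'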
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-578533 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-578533 logs -n 25: (3.065035945s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:54 UTC |
	| start   | -p newest-cni-229943 --memory=2200 --alsologtostderr   | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:54 UTC |
	| start   | -p auto-179125 --memory=3072                           | auto-179125                  | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:56 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC | 19 Jul 24 19:55 UTC |
	| start   | -p kindnet-179125                                      | kindnet-179125               | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229943             | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC | 19 Jul 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229943                                   | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC | 19 Jul 24 19:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229943                  | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC | 19 Jul 24 19:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229943 --memory=2200 --alsologtostderr   | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| ssh     | -p auto-179125 pgrep -a                                | auto-179125                  | jenkins | v1.33.1 | 19 Jul 24 19:56 UTC | 19 Jul 24 19:56 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:55:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:55:46.447551   76497 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:55:46.447702   76497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:55:46.447712   76497 out.go:304] Setting ErrFile to fd 2...
	I0719 19:55:46.447719   76497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:55:46.448003   76497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:55:46.448651   76497 out.go:298] Setting JSON to false
	I0719 19:55:46.449876   76497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9489,"bootTime":1721409457,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:55:46.449956   76497 start.go:139] virtualization: kvm guest
	I0719 19:55:46.470428   76497 out.go:177] * [newest-cni-229943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:55:46.472006   76497 notify.go:220] Checking for updates...
	I0719 19:55:46.473521   76497 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:55:46.474757   76497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:55:46.475978   76497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:55:46.477143   76497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:55:46.478109   76497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:55:46.479149   76497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:55:46.480615   76497 config.go:182] Loaded profile config "newest-cni-229943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:55:46.481023   76497 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:55:46.481060   76497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:55:46.496648   76497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
	I0719 19:55:46.497124   76497 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:55:46.497763   76497 main.go:141] libmachine: Using API Version  1
	I0719 19:55:46.497791   76497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:55:46.498181   76497 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:55:46.498457   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:46.498740   76497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:55:46.499029   76497 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:55:46.499071   76497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:55:46.514493   76497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40143
	I0719 19:55:46.514871   76497 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:55:46.515420   76497 main.go:141] libmachine: Using API Version  1
	I0719 19:55:46.515454   76497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:55:46.515806   76497 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:55:46.516041   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:55:46.550480   76497 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:55:46.551529   76497 start.go:297] selected driver: kvm2
	I0719 19:55:46.551543   76497 start.go:901] validating driver "kvm2" against &{Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:55:46.551698   76497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:55:46.552701   76497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:55:46.552783   76497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:55:46.567387   76497 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:55:46.567776   76497 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 19:55:46.567803   76497 cni.go:84] Creating CNI manager for ""
	I0719 19:55:46.567810   76497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:55:46.567877   76497 start.go:340] cluster config:
	{Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:55:46.567984   76497 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:55:46.569846   76497 out.go:177] * Starting "newest-cni-229943" primary control-plane node in "newest-cni-229943" cluster
	I0719 19:55:45.372448   75617 out.go:204]   - Generating certificates and keys ...
	I0719 19:55:45.372587   75617 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:55:45.372678   75617 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:55:45.372774   75617 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 19:55:45.476578   75617 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 19:55:45.615716   75617 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 19:55:45.781658   75617 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 19:55:45.909736   75617 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 19:55:45.909922   75617 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-179125 localhost] and IPs [192.168.50.146 127.0.0.1 ::1]
	I0719 19:55:46.057942   75617 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 19:55:46.058179   75617 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-179125 localhost] and IPs [192.168.50.146 127.0.0.1 ::1]
	I0719 19:55:46.302199   75617 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 19:55:46.614816   75617 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 19:55:46.703973   75617 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 19:55:46.704082   75617 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:55:46.756744   75617 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:55:46.861205   75617 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:55:46.985958   75617 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:55:47.171384   75617 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:55:47.251645   75617 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:55:47.252378   75617 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:55:47.254831   75617 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:55:47.313443   75617 out.go:204]   - Booting up control plane ...
	I0719 19:55:47.313621   75617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:55:47.313768   75617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:55:47.313871   75617 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:55:47.314010   75617 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:55:47.314139   75617 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:55:47.314211   75617 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:55:47.406549   75617 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:55:47.406692   75617 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:55:47.907106   75617 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.966873ms
	I0719 19:55:47.907185   75617 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:55:46.510702   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:55:46.511203   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find current IP address of domain kindnet-179125 in network mk-kindnet-179125
	I0719 19:55:46.511231   76099 main.go:141] libmachine: (kindnet-179125) DBG | I0719 19:55:46.511169   76269 retry.go:31] will retry after 1.583992302s: waiting for machine to come up
	I0719 19:55:48.096782   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:55:48.097392   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find current IP address of domain kindnet-179125 in network mk-kindnet-179125
	I0719 19:55:48.097419   76099 main.go:141] libmachine: (kindnet-179125) DBG | I0719 19:55:48.097331   76269 retry.go:31] will retry after 2.307355379s: waiting for machine to come up
	I0719 19:55:50.407928   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:55:50.408577   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find current IP address of domain kindnet-179125 in network mk-kindnet-179125
	I0719 19:55:50.408624   76099 main.go:141] libmachine: (kindnet-179125) DBG | I0719 19:55:50.408522   76269 retry.go:31] will retry after 2.529998615s: waiting for machine to come up
	I0719 19:55:46.571209   76497 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:55:46.571241   76497 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:55:46.571248   76497 cache.go:56] Caching tarball of preloaded images
	I0719 19:55:46.571329   76497 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:55:46.571341   76497 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0719 19:55:46.571450   76497 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json ...
	I0719 19:55:46.571665   76497 start.go:360] acquireMachinesLock for newest-cni-229943: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:55:52.905856   75617 kubeadm.go:310] [api-check] The API server is healthy after 5.001783937s
	I0719 19:55:52.919921   75617 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:55:52.930467   75617 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:55:52.953779   75617 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:55:52.954001   75617 kubeadm.go:310] [mark-control-plane] Marking the node auto-179125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:55:52.963944   75617 kubeadm.go:310] [bootstrap-token] Using token: i9x115.rioxl58kkws3zf5c
	I0719 19:55:52.965217   75617 out.go:204]   - Configuring RBAC rules ...
	I0719 19:55:52.965376   75617 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:55:52.969268   75617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:55:52.975608   75617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:55:52.978398   75617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:55:52.982598   75617 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:55:52.988413   75617 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:55:53.313565   75617 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:55:53.751943   75617 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:55:54.313534   75617 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:55:54.313569   75617 kubeadm.go:310] 
	I0719 19:55:54.313664   75617 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:55:54.313676   75617 kubeadm.go:310] 
	I0719 19:55:54.313775   75617 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:55:54.313788   75617 kubeadm.go:310] 
	I0719 19:55:54.313852   75617 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:55:54.313959   75617 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:55:54.314045   75617 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:55:54.314056   75617 kubeadm.go:310] 
	I0719 19:55:54.314125   75617 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:55:54.314134   75617 kubeadm.go:310] 
	I0719 19:55:54.314216   75617 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:55:54.314232   75617 kubeadm.go:310] 
	I0719 19:55:54.314324   75617 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:55:54.314424   75617 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:55:54.314510   75617 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:55:54.314536   75617 kubeadm.go:310] 
	I0719 19:55:54.314643   75617 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:55:54.314773   75617 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:55:54.314790   75617 kubeadm.go:310] 
	I0719 19:55:54.314916   75617 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i9x115.rioxl58kkws3zf5c \
	I0719 19:55:54.315056   75617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:55:54.315090   75617 kubeadm.go:310] 	--control-plane 
	I0719 19:55:54.315099   75617 kubeadm.go:310] 
	I0719 19:55:54.315247   75617 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:55:54.315259   75617 kubeadm.go:310] 
	I0719 19:55:54.315369   75617 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i9x115.rioxl58kkws3zf5c \
	I0719 19:55:54.315515   75617 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:55:54.315617   75617 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
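	[editor's note] The --discovery-token-ca-cert-hash printed by kubeadm above follows the usual kubeadm convention: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing that value, assuming the default kubeadm CA path (/etc/kubernetes/pki/ca.crt is not shown in this log and is an assumption):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed default kubeadm location of the cluster CA certificate.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}

	Run on the control-plane node, the output should match the sha256:... value in the join commands above.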
	I0719 19:55:54.315656   75617 cni.go:84] Creating CNI manager for ""
	I0719 19:55:54.315676   75617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:55:54.317343   75617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:55:54.318645   75617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:55:54.328572   75617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
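	[editor's note] The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. A hypothetical, minimal bridge conflist written the same way from Go is sketched below; the plugin fields and the 10.244.0.0/16 subnet are illustrative assumptions, not the values minikube actually ships.

	package main

	import "os"

	// Hypothetical minimal bridge CNI conflist; minikube's real 1-k8s.conflist
	// may differ in fields and values such as the subnet.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Mirrors the scp step in the log: place the config where the
		// container runtime's CNI discovery will pick it up.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}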
	I0719 19:55:54.348704   75617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:55:54.348827   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-179125 minikube.k8s.io/updated_at=2024_07_19T19_55_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=auto-179125 minikube.k8s.io/primary=true
	I0719 19:55:54.348855   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:54.377111   75617 ops.go:34] apiserver oom_adj: -16
	I0719 19:55:52.941061   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:55:52.941585   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find current IP address of domain kindnet-179125 in network mk-kindnet-179125
	I0719 19:55:52.941613   76099 main.go:141] libmachine: (kindnet-179125) DBG | I0719 19:55:52.941547   76269 retry.go:31] will retry after 3.171112926s: waiting for machine to come up
	I0719 19:55:54.472468   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:54.973433   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:55.472980   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:55.972797   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:56.473193   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:56.972716   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:57.472774   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:57.973553   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:58.473062   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:58.973128   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:56.116361   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:55:56.116773   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find current IP address of domain kindnet-179125 in network mk-kindnet-179125
	I0719 19:55:56.116800   76099 main.go:141] libmachine: (kindnet-179125) DBG | I0719 19:55:56.116726   76269 retry.go:31] will retry after 4.456730319s: waiting for machine to come up
	I0719 19:56:00.574992   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.575518   76099 main.go:141] libmachine: (kindnet-179125) Found IP for machine: 192.168.61.237
	I0719 19:56:00.575547   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has current primary IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.575556   76099 main.go:141] libmachine: (kindnet-179125) Reserving static IP address...
	I0719 19:56:00.575994   76099 main.go:141] libmachine: (kindnet-179125) DBG | unable to find host DHCP lease matching {name: "kindnet-179125", mac: "52:54:00:07:d3:01", ip: "192.168.61.237"} in network mk-kindnet-179125
	I0719 19:56:00.650333   76099 main.go:141] libmachine: (kindnet-179125) DBG | Getting to WaitForSSH function...
	I0719 19:56:00.650365   76099 main.go:141] libmachine: (kindnet-179125) Reserved static IP address: 192.168.61.237
	I0719 19:56:00.650385   76099 main.go:141] libmachine: (kindnet-179125) Waiting for SSH to be available...
	I0719 19:56:00.653444   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.653875   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:00.653920   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.654031   76099 main.go:141] libmachine: (kindnet-179125) DBG | Using SSH client type: external
	I0719 19:56:00.654062   76099 main.go:141] libmachine: (kindnet-179125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa (-rw-------)
	I0719 19:56:00.654097   76099 main.go:141] libmachine: (kindnet-179125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:56:00.654123   76099 main.go:141] libmachine: (kindnet-179125) DBG | About to run SSH command:
	I0719 19:56:00.654139   76099 main.go:141] libmachine: (kindnet-179125) DBG | exit 0
	I0719 19:56:00.783727   76099 main.go:141] libmachine: (kindnet-179125) DBG | SSH cmd err, output: <nil>: 
	I0719 19:56:00.784061   76099 main.go:141] libmachine: (kindnet-179125) KVM machine creation complete!
	I0719 19:56:00.784388   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetConfigRaw
	I0719 19:56:00.784944   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:00.785160   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:00.785342   76099 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 19:56:00.785360   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetState
	I0719 19:56:00.786682   76099 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 19:56:00.786696   76099 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 19:56:00.786701   76099 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 19:56:00.786707   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:00.789206   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.789591   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:00.789624   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.789727   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:00.789864   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:00.790007   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:00.790163   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:00.790335   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:00.790655   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:00.790671   76099 main.go:141] libmachine: About to run SSH command:
	exit 0
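	[editor's note] The "exit 0" command above is libmachine's SSH readiness probe: it repeatedly runs a no-op command over SSH until the guest answers. A simplified Go sketch of the same idea, reusing the host IP and key path shown in the log and shelling out to the system ssh client (fixed retry count and delay are assumptions; the real WaitForSSH loop differs in detail):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Values taken from the log above; adjust for another machine.
		ip := "192.168.61.237"
		key := "/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa"

		for attempt := 1; attempt <= 30; attempt++ {
			cmd := exec.Command("ssh",
				"-i", key,
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"docker@"+ip, "exit 0")
			if err := cmd.Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}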
	I0719 19:56:02.144287   76497 start.go:364] duration metric: took 15.572594207s to acquireMachinesLock for "newest-cni-229943"
	I0719 19:56:02.144351   76497 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:56:02.144359   76497 fix.go:54] fixHost starting: 
	I0719 19:56:02.144752   76497 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:02.144797   76497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:02.162064   76497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0719 19:56:02.162431   76497 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:02.162949   76497 main.go:141] libmachine: Using API Version  1
	I0719 19:56:02.162967   76497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:02.163293   76497 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:02.163501   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:02.163708   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetState
	I0719 19:56:02.165400   76497 fix.go:112] recreateIfNeeded on newest-cni-229943: state=Stopped err=<nil>
	I0719 19:56:02.165425   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	W0719 19:56:02.165594   76497 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:56:02.167907   76497 out.go:177] * Restarting existing kvm2 VM for "newest-cni-229943" ...
	I0719 19:56:00.903031   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:56:00.903060   76099 main.go:141] libmachine: Detecting the provisioner...
	I0719 19:56:00.903072   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:00.905631   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.906025   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:00.906054   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:00.906294   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:00.906526   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:00.906677   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:00.906829   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:00.907038   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:00.907224   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:00.907238   76099 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 19:56:01.020701   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 19:56:01.020776   76099 main.go:141] libmachine: found compatible host: buildroot
	I0719 19:56:01.020786   76099 main.go:141] libmachine: Provisioning with buildroot...
	I0719 19:56:01.020794   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetMachineName
	I0719 19:56:01.021059   76099 buildroot.go:166] provisioning hostname "kindnet-179125"
	I0719 19:56:01.021089   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetMachineName
	I0719 19:56:01.021286   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.024096   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.024526   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.024559   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.024687   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:01.024887   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.025064   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.025215   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:01.025402   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:01.025618   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:01.025635   76099 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-179125 && echo "kindnet-179125" | sudo tee /etc/hostname
	I0719 19:56:01.153575   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-179125
	
	I0719 19:56:01.153609   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.156288   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.156686   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.156723   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.156919   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:01.157108   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.157282   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.157435   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:01.157618   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:01.157819   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:01.157838   76099 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-179125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-179125/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-179125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:56:01.277292   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:56:01.277319   76099 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:56:01.277400   76099 buildroot.go:174] setting up certificates
	I0719 19:56:01.277468   76099 provision.go:84] configureAuth start
	I0719 19:56:01.277498   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetMachineName
	I0719 19:56:01.277800   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetIP
	I0719 19:56:01.280404   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.280717   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.280751   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.280847   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.282821   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.283189   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.283228   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.283332   76099 provision.go:143] copyHostCerts
	I0719 19:56:01.283394   76099 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:56:01.283407   76099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:56:01.283456   76099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:56:01.283559   76099 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:56:01.283569   76099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:56:01.283591   76099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:56:01.283646   76099 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:56:01.283652   76099 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:56:01.283669   76099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:56:01.283711   76099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.kindnet-179125 san=[127.0.0.1 192.168.61.237 kindnet-179125 localhost minikube]
	I0719 19:56:01.462218   76099 provision.go:177] copyRemoteCerts
	I0719 19:56:01.462274   76099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:56:01.462296   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.465258   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.465682   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.465719   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.465903   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:01.466094   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.466248   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:01.466363   76099 sshutil.go:53] new ssh client: &{IP:192.168.61.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa Username:docker}
	I0719 19:56:01.555195   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:56:01.578831   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0719 19:56:01.602067   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:56:01.624197   76099 provision.go:87] duration metric: took 346.706363ms to configureAuth
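The provisioning pass above generates a server certificate whose SANs cover the VM's IP, its hostname, localhost, and the minikube name, then copies the CA and server key pair into /etc/docker on the guest. As a rough illustration of the SAN handling only (not minikube's provision.go; organization names and output paths below are placeholders), a Go sketch using crypto/x509:

// servercert_sketch.go - illustrative only, not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN list mirroring the log line: IP addresses plus host names.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.237")}
	dns := []string{"kindnet-179125", "localhost", "minikube"}

	// Self-signed CA standing in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SAN list, signed by the CA above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-179125"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dns,
		IPAddresses:  ips,
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("server.pem") // placeholder output path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
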
	I0719 19:56:01.624229   76099 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:56:01.624456   76099 config.go:182] Loaded profile config "kindnet-179125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:56:01.624540   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.627235   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.627596   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.627625   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.627851   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:01.628060   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.628215   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.628370   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:01.628543   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:01.628725   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:01.628738   76099 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:56:01.895583   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:56:01.895612   76099 main.go:141] libmachine: Checking connection to Docker...
	I0719 19:56:01.895622   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetURL
	I0719 19:56:01.897174   76099 main.go:141] libmachine: (kindnet-179125) DBG | Using libvirt version 6000000
	I0719 19:56:01.899541   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.899935   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.899967   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.900153   76099 main.go:141] libmachine: Docker is up and running!
	I0719 19:56:01.900172   76099 main.go:141] libmachine: Reticulating splines...
	I0719 19:56:01.900180   76099 client.go:171] duration metric: took 23.891701748s to LocalClient.Create
	I0719 19:56:01.900229   76099 start.go:167] duration metric: took 23.891783733s to libmachine.API.Create "kindnet-179125"
	I0719 19:56:01.900242   76099 start.go:293] postStartSetup for "kindnet-179125" (driver="kvm2")
	I0719 19:56:01.900255   76099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:56:01.900278   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:01.900585   76099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:56:01.900610   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:01.903019   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.903356   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:01.903381   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:01.903693   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:01.903897   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:01.904075   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:01.904191   76099 sshutil.go:53] new ssh client: &{IP:192.168.61.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa Username:docker}
	I0719 19:56:01.990119   76099 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:56:01.994247   76099 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:56:01.994271   76099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:56:01.994353   76099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:56:01.994442   76099 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:56:01.994540   76099 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:56:02.003448   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:56:02.026223   76099 start.go:296] duration metric: took 125.966932ms for postStartSetup
	I0719 19:56:02.026281   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetConfigRaw
	I0719 19:56:02.026916   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetIP
	I0719 19:56:02.029842   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.030259   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:02.030296   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.030545   76099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/config.json ...
	I0719 19:56:02.030746   76099 start.go:128] duration metric: took 24.046149569s to createHost
	I0719 19:56:02.030775   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:02.033375   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.033774   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:02.033800   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.033963   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:02.034166   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:02.034354   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:02.034541   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:02.034772   76099 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:02.034975   76099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.237 22 <nil> <nil>}
	I0719 19:56:02.034989   76099 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:56:02.144151   76099 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721418962.126825703
	
	I0719 19:56:02.144178   76099 fix.go:216] guest clock: 1721418962.126825703
	I0719 19:56:02.144187   76099 fix.go:229] Guest: 2024-07-19 19:56:02.126825703 +0000 UTC Remote: 2024-07-19 19:56:02.030761381 +0000 UTC m=+31.212176222 (delta=96.064322ms)
	I0719 19:56:02.144208   76099 fix.go:200] guest clock delta is within tolerance: 96.064322ms
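fix.go reads the guest's "date +%s.%N" output, compares it with the host clock, and accepts the machine when the delta stays inside a tolerance. A minimal sketch of that comparison; the 2-second tolerance is an assumed value for illustration, not minikube's constant:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns the
// absolute offset from the supplied reference time.
func clockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Duration(math.Abs(float64(ref.Sub(guest)))), nil
}

func main() {
	// Guest timestamp copied from the log above; reference time is "now" here.
	delta, err := clockDelta("1721418962.126825703", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for the demo
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}
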
	I0719 19:56:02.144215   76099 start.go:83] releasing machines lock for "kindnet-179125", held for 24.159825866s
	I0719 19:56:02.144242   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:02.144503   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetIP
	I0719 19:56:02.147621   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.148099   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:02.148126   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.148323   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:02.148843   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:02.149062   76099 main.go:141] libmachine: (kindnet-179125) Calling .DriverName
	I0719 19:56:02.149164   76099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:56:02.149207   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:02.149334   76099 ssh_runner.go:195] Run: cat /version.json
	I0719 19:56:02.149359   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHHostname
	I0719 19:56:02.151995   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.152412   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:02.152433   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.152458   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.152568   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:02.152738   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:02.152899   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:02.152925   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:02.152957   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:02.153119   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHPort
	I0719 19:56:02.153126   76099 sshutil.go:53] new ssh client: &{IP:192.168.61.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa Username:docker}
	I0719 19:56:02.153288   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHKeyPath
	I0719 19:56:02.153454   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetSSHUsername
	I0719 19:56:02.153608   76099 sshutil.go:53] new ssh client: &{IP:192.168.61.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/kindnet-179125/id_rsa Username:docker}
	I0719 19:56:02.277964   76099 ssh_runner.go:195] Run: systemctl --version
	I0719 19:56:02.284116   76099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:56:02.444933   76099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:56:02.450674   76099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:56:02.450736   76099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:56:02.466207   76099 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
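The find/mv pass above sidelines any bridge or podman CNI config by appending a ".mk_disabled" suffix so only the requested CNI stays active. A rough Go equivalent of that rename sweep (directory path taken from the log; the selection logic is simplified):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and files that are already sidelined.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection the find command uses: bridge or podman configs.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Printf("rename %s: %v", src, err)
		}
	}
}
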
	I0719 19:56:02.466229   76099 start.go:495] detecting cgroup driver to use...
	I0719 19:56:02.466286   76099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:56:02.482871   76099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:56:02.501068   76099 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:56:02.501127   76099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:56:02.515609   76099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:56:02.533913   76099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:56:02.666943   76099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:56:02.825121   76099 docker.go:233] disabling docker service ...
	I0719 19:56:02.825184   76099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:56:02.840112   76099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:56:02.854798   76099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:56:03.006433   76099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:56:03.171657   76099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:56:03.187549   76099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:56:03.205533   76099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:56:03.205612   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.215211   76099 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:56:03.215277   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.225066   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.234801   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.245095   76099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:56:03.255575   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.266104   76099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.283092   76099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:03.299591   76099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:56:03.308868   76099 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:56:03.308949   76099 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:56:03.321502   76099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
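Because br_netfilter is not loaded yet, the sysctl probe above fails; the flow then loads the module and enables IPv4 forwarding. A small sketch of those prerequisites (root is required to actually write the proc files; error handling is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, load the kernel module first.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
			return
		}
	}
	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Printf("enable ip_forward: %v\n", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward prerequisites satisfied")
}
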
	I0719 19:56:03.333481   76099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:56:03.462467   76099 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:56:03.614141   76099 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:56:03.614267   76099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:56:03.619001   76099 start.go:563] Will wait 60s for crictl version
	I0719 19:56:03.619054   76099 ssh_runner.go:195] Run: which crictl
	I0719 19:56:03.622633   76099 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:56:03.659912   76099 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:56:03.660007   76099 ssh_runner.go:195] Run: crio --version
	I0719 19:56:03.690078   76099 ssh_runner.go:195] Run: crio --version
	I0719 19:56:03.721137   76099 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:55:59.473290   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:55:59.972920   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:00.473535   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:00.973032   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:01.473037   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:01.973442   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:02.473087   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:02.972552   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:03.473030   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:03.973040   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:03.722302   76099 main.go:141] libmachine: (kindnet-179125) Calling .GetIP
	I0719 19:56:03.725743   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:03.726247   76099 main.go:141] libmachine: (kindnet-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:d3:01", ip: ""} in network mk-kindnet-179125: {Iface:virbr1 ExpiryTime:2024-07-19 20:55:52 +0000 UTC Type:0 Mac:52:54:00:07:d3:01 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:kindnet-179125 Clientid:01:52:54:00:07:d3:01}
	I0719 19:56:03.726323   76099 main.go:141] libmachine: (kindnet-179125) DBG | domain kindnet-179125 has defined IP address 192.168.61.237 and MAC address 52:54:00:07:d3:01 in network mk-kindnet-179125
	I0719 19:56:03.726535   76099 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:56:03.731711   76099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
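The shell pipeline above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP. A Go sketch with the same effect; the /tmp path in main is a placeholder so the example does not need root:

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "<tab><host>" and
// appends a fresh "<ip><tab><host>" entry, the same effect as the
// grep/echo pipeline shown in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Placeholder file; minikube rewrites /etc/hosts inside the guest VM.
	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.61.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
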
	I0719 19:56:03.744282   76099 kubeadm.go:883] updating cluster {Name:kindnet-179125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-179125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.237 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:56:03.744412   76099 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:56:03.744453   76099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:56:03.785661   76099 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:56:03.785732   76099 ssh_runner.go:195] Run: which lz4
	I0719 19:56:03.789653   76099 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:56:03.794696   76099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:56:03.794730   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:56:05.098118   76099 crio.go:462] duration metric: took 1.308505184s to copy over tarball
	I0719 19:56:05.098202   76099 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
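Since no preloaded images were found in the CRI-O store, the preload tarball is copied to the guest and unpacked with lz4. A hedged sketch of that copy-if-missing-then-extract decision; the cache path is hypothetical and a local cp stands in for the scp step:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical paths standing in for the preload tarball handling above.
	tarball := "/preloaded.tar.lz4"
	cached := "/home/jenkins/.minikube/cache/preloaded-images.tar.lz4"

	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// In the real flow this is an scp to the VM; a local copy is used here.
		if out, err := exec.Command("cp", cached, tarball).CombinedOutput(); err != nil {
			fmt.Printf("copy preload tarball: %v (%s)\n", err, out)
			return
		}
	}
	// Extract with the flags the log shows: lz4 decompression into /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract preload tarball: %v (%s)\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}
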
	I0719 19:56:02.169374   76497 main.go:141] libmachine: (newest-cni-229943) Calling .Start
	I0719 19:56:02.169580   76497 main.go:141] libmachine: (newest-cni-229943) Ensuring networks are active...
	I0719 19:56:02.170336   76497 main.go:141] libmachine: (newest-cni-229943) Ensuring network default is active
	I0719 19:56:02.170880   76497 main.go:141] libmachine: (newest-cni-229943) Ensuring network mk-newest-cni-229943 is active
	I0719 19:56:02.171276   76497 main.go:141] libmachine: (newest-cni-229943) Getting domain xml...
	I0719 19:56:02.171995   76497 main.go:141] libmachine: (newest-cni-229943) Creating domain...
	I0719 19:56:03.499559   76497 main.go:141] libmachine: (newest-cni-229943) Waiting to get IP...
	I0719 19:56:03.500437   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:03.500884   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:03.500961   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:03.500864   76616 retry.go:31] will retry after 216.932625ms: waiting for machine to come up
	I0719 19:56:03.719485   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:03.720074   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:03.720099   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:03.720026   76616 retry.go:31] will retry after 288.395367ms: waiting for machine to come up
	I0719 19:56:04.010525   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:04.011219   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:04.011252   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:04.011164   76616 retry.go:31] will retry after 372.633565ms: waiting for machine to come up
	I0719 19:56:04.385944   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:04.386632   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:04.386663   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:04.386578   76616 retry.go:31] will retry after 543.147991ms: waiting for machine to come up
	I0719 19:56:04.931319   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:04.931906   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:04.931929   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:04.931852   76616 retry.go:31] will retry after 724.820688ms: waiting for machine to come up
	I0719 19:56:05.658531   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:05.659089   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:05.659125   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:05.659013   76616 retry.go:31] will retry after 623.123013ms: waiting for machine to come up
	I0719 19:56:06.283615   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:06.284328   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:06.284359   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:06.284254   76616 retry.go:31] will retry after 821.222494ms: waiting for machine to come up
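Meanwhile the newest-cni-229943 machine is polled for a DHCP lease with growing, jittered delays ("will retry after ..."). A generic sketch of such a poll loop; the lookup closure below is a stand-in, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping a growing, jittered interval between tries.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet") // simulate DHCP not ready
		}
		return "192.168.72.100", nil // hypothetical address for the demo
	}, 10)
	fmt.Println(ip, err)
}
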
	I0719 19:56:04.473191   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:04.973496   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:05.472540   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:05.973310   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:06.472679   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:06.972803   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:07.473102   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:07.973049   75617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:08.150950   75617 kubeadm.go:1113] duration metric: took 13.802147342s to wait for elevateKubeSystemPrivileges
	I0719 19:56:08.150985   75617 kubeadm.go:394] duration metric: took 23.532852528s to StartCluster
	I0719 19:56:08.151009   75617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.151094   75617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:56:08.152483   75617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.152916   75617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 19:56:08.152929   75617 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:56:08.153192   75617 config.go:182] Loaded profile config "auto-179125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:56:08.153226   75617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:56:08.153284   75617 addons.go:69] Setting storage-provisioner=true in profile "auto-179125"
	I0719 19:56:08.153309   75617 addons.go:234] Setting addon storage-provisioner=true in "auto-179125"
	I0719 19:56:08.153333   75617 host.go:66] Checking if "auto-179125" exists ...
	I0719 19:56:08.153387   75617 addons.go:69] Setting default-storageclass=true in profile "auto-179125"
	I0719 19:56:08.153421   75617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-179125"
	I0719 19:56:08.153870   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:08.153891   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:08.154232   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:08.154261   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:08.154480   75617 out.go:177] * Verifying Kubernetes components...
	I0719 19:56:08.155960   75617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:56:08.172050   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
	I0719 19:56:08.172572   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:08.173086   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:56:08.173107   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:08.173515   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:08.173696   75617 main.go:141] libmachine: (auto-179125) Calling .GetState
	I0719 19:56:08.174375   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0719 19:56:08.174789   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:08.175426   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:56:08.175460   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:08.177059   75617 addons.go:234] Setting addon default-storageclass=true in "auto-179125"
	I0719 19:56:08.177091   75617 host.go:66] Checking if "auto-179125" exists ...
	I0719 19:56:08.177381   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:08.177417   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:08.177848   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:08.178457   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:08.178494   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:08.193308   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0719 19:56:08.193702   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:08.194203   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:56:08.194219   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:08.194519   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:08.194990   75617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:56:08.195026   75617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:56:08.198606   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0719 19:56:08.199368   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:08.199957   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:56:08.199977   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:08.203931   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:08.204135   75617 main.go:141] libmachine: (auto-179125) Calling .GetState
	I0719 19:56:08.207949   75617 main.go:141] libmachine: (auto-179125) Calling .DriverName
	I0719 19:56:08.215943   75617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0719 19:56:08.216497   75617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:56:08.217050   75617 main.go:141] libmachine: Using API Version  1
	I0719 19:56:08.217075   75617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:56:08.217497   75617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:56:08.217740   75617 main.go:141] libmachine: (auto-179125) Calling .GetState
	I0719 19:56:08.219476   75617 main.go:141] libmachine: (auto-179125) Calling .DriverName
	I0719 19:56:08.219828   75617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:56:08.219868   75617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:56:08.219887   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHHostname
	I0719 19:56:08.222700   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:56:08.223171   75617 main.go:141] libmachine: (auto-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:d9:03", ip: ""} in network mk-auto-179125: {Iface:virbr3 ExpiryTime:2024-07-19 20:55:27 +0000 UTC Type:0 Mac:52:54:00:c8:d9:03 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:auto-179125 Clientid:01:52:54:00:c8:d9:03}
	I0719 19:56:08.223185   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined IP address 192.168.50.146 and MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:56:08.223424   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHPort
	I0719 19:56:08.223578   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHKeyPath
	I0719 19:56:08.223741   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHUsername
	I0719 19:56:08.223886   75617 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/id_rsa Username:docker}
	I0719 19:56:08.231783   75617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:56:08.249880   75617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:56:08.249908   75617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:56:08.249933   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHHostname
	I0719 19:56:08.253774   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:56:08.254283   75617 main.go:141] libmachine: (auto-179125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:d9:03", ip: ""} in network mk-auto-179125: {Iface:virbr3 ExpiryTime:2024-07-19 20:55:27 +0000 UTC Type:0 Mac:52:54:00:c8:d9:03 Iaid: IPaddr:192.168.50.146 Prefix:24 Hostname:auto-179125 Clientid:01:52:54:00:c8:d9:03}
	I0719 19:56:08.254310   75617 main.go:141] libmachine: (auto-179125) DBG | domain auto-179125 has defined IP address 192.168.50.146 and MAC address 52:54:00:c8:d9:03 in network mk-auto-179125
	I0719 19:56:08.254566   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHPort
	I0719 19:56:08.254801   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHKeyPath
	I0719 19:56:08.254962   75617 main.go:141] libmachine: (auto-179125) Calling .GetSSHUsername
	I0719 19:56:08.255107   75617 sshutil.go:53] new ssh client: &{IP:192.168.50.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/auto-179125/id_rsa Username:docker}
	I0719 19:56:08.411748   75617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:56:08.412117   75617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 19:56:08.443866   75617 node_ready.go:35] waiting up to 15m0s for node "auto-179125" to be "Ready" ...
	I0719 19:56:08.453629   75617 node_ready.go:49] node "auto-179125" has status "Ready":"True"
	I0719 19:56:08.453660   75617 node_ready.go:38] duration metric: took 9.764316ms for node "auto-179125" to be "Ready" ...
	I0719 19:56:08.453671   75617 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:56:08.470370   75617 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-d99nr" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:08.589160   75617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:56:08.592012   75617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:56:07.604561   76099 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.506323216s)
	I0719 19:56:07.604593   76099 crio.go:469] duration metric: took 2.506445467s to extract the tarball
	I0719 19:56:07.604602   76099 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:56:07.644920   76099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:56:07.692934   76099 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:56:07.692958   76099 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:56:07.692970   76099 kubeadm.go:934] updating node { 192.168.61.237 8443 v1.30.3 crio true true} ...
	I0719 19:56:07.693113   76099 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-179125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-179125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0719 19:56:07.693198   76099 ssh_runner.go:195] Run: crio config
	I0719 19:56:07.750137   76099 cni.go:84] Creating CNI manager for "kindnet"
	I0719 19:56:07.750163   76099 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:56:07.750190   76099 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.237 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-179125 NodeName:kindnet-179125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:56:07.750393   76099 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-179125"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:56:07.750486   76099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:56:07.760859   76099 binaries.go:44] Found k8s binaries, skipping transfer
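The kubeadm, kubelet, and kube-proxy configuration dumped above is rendered from a handful of cluster parameters (node name, node IP, Kubernetes version, pod and service CIDRs). A stripped-down illustration of that rendering with text/template; the struct and template cover only a few of the fields shown in the log and are not minikube's types:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values substituted into the config; field names
// here are illustrative, not minikube's internal types.
type clusterParams struct {
	NodeName   string
	NodeIP     string
	K8sVersion string
	PodSubnet  string
	SvcSubnet  string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.SvcSubnet}}
`

func main() {
	p := clusterParams{
		NodeName:   "kindnet-179125",
		NodeIP:     "192.168.61.237",
		K8sVersion: "v1.30.3",
		PodSubnet:  "10.244.0.0/16",
		SvcSubnet:  "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
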
	I0719 19:56:07.760934   76099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:56:07.770774   76099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0719 19:56:07.788648   76099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:56:07.806863   76099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0719 19:56:07.824949   76099 ssh_runner.go:195] Run: grep 192.168.61.237	control-plane.minikube.internal$ /etc/hosts
	I0719 19:56:07.828903   76099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:56:07.842456   76099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:56:07.983786   76099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:56:08.003564   76099 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125 for IP: 192.168.61.237
	I0719 19:56:08.003593   76099 certs.go:194] generating shared ca certs ...
	I0719 19:56:08.003614   76099 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.003782   76099 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:56:08.003864   76099 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:56:08.003879   76099 certs.go:256] generating profile certs ...
	I0719 19:56:08.003947   76099 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.key
	I0719 19:56:08.003965   76099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.crt with IP's: []
	I0719 19:56:08.124417   76099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.crt ...
	I0719 19:56:08.124447   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.crt: {Name:mk4acc7e58a8cf62e278ee024d30c1ace8f6a46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.124621   76099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.key ...
	I0719 19:56:08.124651   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/client.key: {Name:mka9bc3cade1a7daf4c22faaa2722019a9f31ef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.124769   76099 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key.0ea21d1a
	I0719 19:56:08.124788   76099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt.0ea21d1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.237]
	I0719 19:56:08.531814   76099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt.0ea21d1a ...
	I0719 19:56:08.531865   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt.0ea21d1a: {Name:mkf606c5eccc932c2df43231634f28d3bc8215b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.553336   76099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key.0ea21d1a ...
	I0719 19:56:08.553380   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key.0ea21d1a: {Name:mk3c05ace871bf5e72de4cee90d1c508ce73ce3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.553696   76099 certs.go:381] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt.0ea21d1a -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt
	I0719 19:56:08.553845   76099 certs.go:385] copying /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key.0ea21d1a -> /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key
	I0719 19:56:08.553941   76099 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.key
	I0719 19:56:08.553968   76099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.crt with IP's: []
	I0719 19:56:08.661090   76099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.crt ...
	I0719 19:56:08.661122   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.crt: {Name:mk457cf82950a8fec6e7a3752f282e5b1c3ed373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.679429   76099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.key ...
	I0719 19:56:08.679467   76099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.key: {Name:mk6ae817964be17c501da427467f6a592978a788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:08.679812   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:56:08.679944   76099 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:56:08.679966   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:56:08.680003   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:56:08.680035   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:56:08.680066   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:56:08.680129   76099 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:56:08.680993   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:56:08.755456   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:56:08.795582   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:56:08.827882   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:56:08.856970   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 19:56:08.889698   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:56:08.920279   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:56:08.981127   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/kindnet-179125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:56:09.016655   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:56:09.047602   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:56:09.078199   76099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:56:09.108710   76099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:56:09.127898   76099 ssh_runner.go:195] Run: openssl version
	I0719 19:56:09.133933   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:56:09.145567   76099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:56:09.150025   76099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:56:09.150092   76099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:56:09.156302   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:56:09.167505   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:56:09.181928   76099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:56:09.186447   76099 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:56:09.186509   76099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:56:09.192189   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:56:09.203022   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:56:09.217125   76099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:56:09.222811   76099 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:56:09.222873   76099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:56:09.230251   76099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
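The three openssl/ln pairs above are how each CA gets registered with the guest's trust store: the symlink name is the certificate's OpenSSL subject hash with a ".0" suffix (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal shell sketch of that derivation, using the minikubeCA path from the log (the HASH variable is illustrative and not part of the log):

    # subject hash of the CA, e.g. b5213941 for minikubeCA in this run
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # hash-named symlink is what OpenSSL's default verify path looks up
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"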
	I0719 19:56:09.244134   76099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:56:09.248903   76099 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 19:56:09.248962   76099 kubeadm.go:392] StartCluster: {Name:kindnet-179125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-179125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.237 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:56:09.249059   76099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:56:09.249151   76099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:56:09.292368   76099 cri.go:89] found id: ""
	I0719 19:56:09.292448   76099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:56:09.307398   76099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:56:09.319615   76099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:56:09.329274   76099 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:56:09.329300   76099 kubeadm.go:157] found existing configuration files:
	
	I0719 19:56:09.329367   76099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:56:09.340787   76099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:56:09.340863   76099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:56:09.349857   76099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:56:09.358772   76099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:56:09.358826   76099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:56:09.367287   76099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:56:09.375583   76099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:56:09.375643   76099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:56:09.384932   76099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:56:09.396047   76099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:56:09.396120   76099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:56:09.406875   76099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:56:09.482283   76099 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:56:09.482361   76099 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:56:09.602100   76099 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:56:09.602291   76099 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:56:09.602439   76099 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:56:09.812386   76099 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:56:10.011939   76099 out.go:204]   - Generating certificates and keys ...
	I0719 19:56:10.012085   76099 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:56:10.012202   76099 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:56:10.012296   76099 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 19:56:10.127254   76099 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 19:56:10.464556   76099 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 19:56:10.578068   76099 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 19:56:10.709920   76099 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 19:56:10.710147   76099 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-179125 localhost] and IPs [192.168.61.237 127.0.0.1 ::1]
	I0719 19:56:10.785717   76099 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 19:56:10.785887   76099 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-179125 localhost] and IPs [192.168.61.237 127.0.0.1 ::1]
	I0719 19:56:09.574200   75617 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.162041005s)
	I0719 19:56:09.574286   75617 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0719 19:56:09.574236   75617 main.go:141] libmachine: Making call to close driver server
	I0719 19:56:09.574340   75617 main.go:141] libmachine: (auto-179125) Calling .Close
	I0719 19:56:09.574677   75617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:56:09.574701   75617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:56:09.574717   75617 main.go:141] libmachine: Making call to close driver server
	I0719 19:56:09.574731   75617 main.go:141] libmachine: (auto-179125) Calling .Close
	I0719 19:56:09.574963   75617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:56:09.574977   75617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:56:10.867794   75617 main.go:141] libmachine: Making call to close driver server
	I0719 19:56:10.867819   75617 main.go:141] libmachine: (auto-179125) Calling .Close
	I0719 19:56:10.868125   75617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:56:10.868146   75617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:56:10.875199   75617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-179125" context rescaled to 1 replicas
	I0719 19:56:10.876847   75617 pod_ready.go:102] pod "coredns-7db6d8ff4d-d99nr" in "kube-system" namespace has status "Ready":"False"
	I0719 19:56:10.977855   75617 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.385804692s)
	I0719 19:56:10.977903   75617 main.go:141] libmachine: Making call to close driver server
	I0719 19:56:10.977914   75617 main.go:141] libmachine: (auto-179125) Calling .Close
	I0719 19:56:10.979826   75617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:56:10.979849   75617 main.go:141] libmachine: (auto-179125) DBG | Closing plugin on server side
	I0719 19:56:10.979858   75617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:56:10.979869   75617 main.go:141] libmachine: Making call to close driver server
	I0719 19:56:10.979877   75617 main.go:141] libmachine: (auto-179125) Calling .Close
	I0719 19:56:10.980209   75617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:56:10.980226   75617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:56:10.980240   75617 main.go:141] libmachine: (auto-179125) DBG | Closing plugin on server side
	I0719 19:56:10.982499   75617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 19:56:07.107039   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:07.107606   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:07.107625   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:07.107534   76616 retry.go:31] will retry after 1.044669241s: waiting for machine to come up
	I0719 19:56:08.153647   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:08.154431   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:08.154452   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:08.154339   76616 retry.go:31] will retry after 1.468305878s: waiting for machine to come up
	I0719 19:56:09.625250   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:09.625866   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:09.625916   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:09.625845   76616 retry.go:31] will retry after 1.401443306s: waiting for machine to come up
	I0719 19:56:11.029218   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:11.029755   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:11.029790   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:11.029695   76616 retry.go:31] will retry after 2.559889129s: waiting for machine to come up
	I0719 19:56:10.975612   76099 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 19:56:11.039799   76099 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 19:56:11.223558   76099 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 19:56:11.223676   76099 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:56:11.469855   76099 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:56:11.680688   76099 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:56:11.882081   76099 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:56:12.003011   76099 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:56:12.238555   76099 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:56:12.239264   76099 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:56:12.243500   76099 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:56:10.983852   75617 addons.go:510] duration metric: took 2.830604034s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0719 19:56:11.977902   75617 pod_ready.go:92] pod "coredns-7db6d8ff4d-d99nr" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:11.977932   75617 pod_ready.go:81] duration metric: took 3.507531947s for pod "coredns-7db6d8ff4d-d99nr" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.977945   75617 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-lxh2j" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.984959   75617 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxh2j" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:11.984982   75617 pod_ready.go:81] duration metric: took 7.028651ms for pod "coredns-7db6d8ff4d-lxh2j" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.984997   75617 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.989690   75617 pod_ready.go:92] pod "etcd-auto-179125" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:11.989712   75617 pod_ready.go:81] duration metric: took 4.705798ms for pod "etcd-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.989724   75617 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.994947   75617 pod_ready.go:92] pod "kube-apiserver-auto-179125" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:11.994972   75617 pod_ready.go:81] duration metric: took 5.238639ms for pod "kube-apiserver-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:11.994986   75617 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.080635   75617 pod_ready.go:92] pod "kube-controller-manager-auto-179125" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:12.080660   75617 pod_ready.go:81] duration metric: took 85.665425ms for pod "kube-controller-manager-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.080673   75617 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-6z697" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.482718   75617 pod_ready.go:92] pod "kube-proxy-6z697" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:12.482751   75617 pod_ready.go:81] duration metric: took 402.069521ms for pod "kube-proxy-6z697" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.482765   75617 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.881239   75617 pod_ready.go:92] pod "kube-scheduler-auto-179125" in "kube-system" namespace has status "Ready":"True"
	I0719 19:56:12.881266   75617 pod_ready.go:81] duration metric: took 398.493887ms for pod "kube-scheduler-auto-179125" in "kube-system" namespace to be "Ready" ...
	I0719 19:56:12.881274   75617 pod_ready.go:38] duration metric: took 4.427590891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:56:12.881287   75617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:56:12.881359   75617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:56:12.901056   75617 api_server.go:72] duration metric: took 4.748092261s to wait for apiserver process to appear ...
	I0719 19:56:12.901087   75617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:56:12.901110   75617 api_server.go:253] Checking apiserver healthz at https://192.168.50.146:8443/healthz ...
	I0719 19:56:12.908548   75617 api_server.go:279] https://192.168.50.146:8443/healthz returned 200:
	ok
	I0719 19:56:12.909957   75617 api_server.go:141] control plane version: v1.30.3
	I0719 19:56:12.909983   75617 api_server.go:131] duration metric: took 8.890089ms to wait for apiserver health ...
	I0719 19:56:12.909991   75617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:56:13.084267   75617 system_pods.go:59] 8 kube-system pods found
	I0719 19:56:13.084311   75617 system_pods.go:61] "coredns-7db6d8ff4d-d99nr" [6b3fca7a-819f-40d9-923b-bd88a046953e] Running
	I0719 19:56:13.084319   75617 system_pods.go:61] "coredns-7db6d8ff4d-lxh2j" [2e7c6e5c-0e23-42cc-886a-a3b45be30d75] Running
	I0719 19:56:13.084325   75617 system_pods.go:61] "etcd-auto-179125" [a7474c4c-8b3e-436a-95db-cc3e298511a7] Running
	I0719 19:56:13.084330   75617 system_pods.go:61] "kube-apiserver-auto-179125" [ddaccc3e-a98c-4267-9b05-ccc17aeb155e] Running
	I0719 19:56:13.084335   75617 system_pods.go:61] "kube-controller-manager-auto-179125" [2ca28f39-8ebd-4e54-9838-9d7ed16ed48d] Running
	I0719 19:56:13.084340   75617 system_pods.go:61] "kube-proxy-6z697" [1f70aabc-95bc-47cd-910b-d5db3c77346a] Running
	I0719 19:56:13.084344   75617 system_pods.go:61] "kube-scheduler-auto-179125" [23bee4af-c02a-4068-98f3-d84ed0dcb6c8] Running
	I0719 19:56:13.084348   75617 system_pods.go:61] "storage-provisioner" [4a136357-469d-4fb5-9c89-f4a99666b593] Running
	I0719 19:56:13.084355   75617 system_pods.go:74] duration metric: took 174.358532ms to wait for pod list to return data ...
	I0719 19:56:13.084364   75617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:56:13.279943   75617 default_sa.go:45] found service account: "default"
	I0719 19:56:13.279972   75617 default_sa.go:55] duration metric: took 195.600791ms for default service account to be created ...
	I0719 19:56:13.279983   75617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:56:13.485998   75617 system_pods.go:86] 8 kube-system pods found
	I0719 19:56:13.486036   75617 system_pods.go:89] "coredns-7db6d8ff4d-d99nr" [6b3fca7a-819f-40d9-923b-bd88a046953e] Running
	I0719 19:56:13.486046   75617 system_pods.go:89] "coredns-7db6d8ff4d-lxh2j" [2e7c6e5c-0e23-42cc-886a-a3b45be30d75] Running
	I0719 19:56:13.486052   75617 system_pods.go:89] "etcd-auto-179125" [a7474c4c-8b3e-436a-95db-cc3e298511a7] Running
	I0719 19:56:13.486059   75617 system_pods.go:89] "kube-apiserver-auto-179125" [ddaccc3e-a98c-4267-9b05-ccc17aeb155e] Running
	I0719 19:56:13.486065   75617 system_pods.go:89] "kube-controller-manager-auto-179125" [2ca28f39-8ebd-4e54-9838-9d7ed16ed48d] Running
	I0719 19:56:13.486072   75617 system_pods.go:89] "kube-proxy-6z697" [1f70aabc-95bc-47cd-910b-d5db3c77346a] Running
	I0719 19:56:13.486080   75617 system_pods.go:89] "kube-scheduler-auto-179125" [23bee4af-c02a-4068-98f3-d84ed0dcb6c8] Running
	I0719 19:56:13.486086   75617 system_pods.go:89] "storage-provisioner" [4a136357-469d-4fb5-9c89-f4a99666b593] Running
	I0719 19:56:13.486094   75617 system_pods.go:126] duration metric: took 206.104177ms to wait for k8s-apps to be running ...
	I0719 19:56:13.486102   75617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:56:13.486155   75617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:56:13.506859   75617 system_svc.go:56] duration metric: took 20.747024ms WaitForService to wait for kubelet
	I0719 19:56:13.506893   75617 kubeadm.go:582] duration metric: took 5.35393162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:56:13.506923   75617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:56:13.681711   75617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:56:13.681782   75617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:56:13.681800   75617 node_conditions.go:105] duration metric: took 174.868483ms to run NodePressure ...
	I0719 19:56:13.681815   75617 start.go:241] waiting for startup goroutines ...
	I0719 19:56:13.681825   75617 start.go:246] waiting for cluster config update ...
	I0719 19:56:13.681840   75617 start.go:255] writing updated cluster config ...
	I0719 19:56:13.682193   75617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:56:13.751393   75617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:56:13.754482   75617 out.go:177] * Done! kubectl is now configured to use "auto-179125" cluster and "default" namespace by default
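With the auto-179125 profile reported as done, a quick way to re-check from the host is to list the pods the log just verified; this is a hedged sketch that assumes kubectl is on PATH and that the kubeconfig context carries the profile name, as the "Done!" message above indicates:

    # the eight kube-system pods enumerated above should all show Running
    kubectl --context auto-179125 get pods -n kube-system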
	I0719 19:56:12.245404   76099 out.go:204]   - Booting up control plane ...
	I0719 19:56:12.245538   76099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:56:12.245651   76099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:56:12.245785   76099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:56:12.262960   76099 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:56:12.264172   76099 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:56:12.264428   76099 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:56:12.413592   76099 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:56:12.413755   76099 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:56:13.414389   76099 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001081192s
	I0719 19:56:13.414596   76099 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:56:13.591085   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:13.591734   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:13.591759   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:13.591646   76616 retry.go:31] will retry after 2.547419156s: waiting for machine to come up
	I0719 19:56:16.141156   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:16.141634   76497 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:56:16.141676   76497 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:56:16.141590   76616 retry.go:31] will retry after 3.304019536s: waiting for machine to come up
	I0719 19:56:19.417537   76099 kubeadm.go:310] [api-check] The API server is healthy after 6.002534065s
	I0719 19:56:19.428272   76099 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:56:19.440857   76099 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:56:19.482305   76099 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:56:19.482497   76099 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-179125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:56:19.496654   76099 kubeadm.go:310] [bootstrap-token] Using token: bdbwvv.1zkv7qjb13er8g7e
	I0719 19:56:19.497960   76099 out.go:204]   - Configuring RBAC rules ...
	I0719 19:56:19.498101   76099 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:56:19.505432   76099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:56:19.515962   76099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:56:19.525543   76099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:56:19.529478   76099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:56:19.534520   76099 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:56:19.824457   76099 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:56:20.247809   76099 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:56:20.822206   76099 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:56:20.823397   76099 kubeadm.go:310] 
	I0719 19:56:20.823484   76099 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:56:20.823501   76099 kubeadm.go:310] 
	I0719 19:56:20.823575   76099 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:56:20.823589   76099 kubeadm.go:310] 
	I0719 19:56:20.823632   76099 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:56:20.823697   76099 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:56:20.823777   76099 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:56:20.823785   76099 kubeadm.go:310] 
	I0719 19:56:20.823894   76099 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:56:20.823904   76099 kubeadm.go:310] 
	I0719 19:56:20.823972   76099 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:56:20.823983   76099 kubeadm.go:310] 
	I0719 19:56:20.824065   76099 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:56:20.824186   76099 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:56:20.824312   76099 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:56:20.824369   76099 kubeadm.go:310] 
	I0719 19:56:20.824492   76099 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:56:20.824607   76099 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:56:20.824620   76099 kubeadm.go:310] 
	I0719 19:56:20.824744   76099 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdbwvv.1zkv7qjb13er8g7e \
	I0719 19:56:20.824943   76099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:56:20.824990   76099 kubeadm.go:310] 	--control-plane 
	I0719 19:56:20.825000   76099 kubeadm.go:310] 
	I0719 19:56:20.825122   76099 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:56:20.825137   76099 kubeadm.go:310] 
	I0719 19:56:20.825246   76099 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdbwvv.1zkv7qjb13er8g7e \
	I0719 19:56:20.825405   76099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:56:20.825555   76099 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
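The kubeadm output above ends with the join commands and the pinned CA cert hash (sha256:aa68ad...). If that hash ever needs to be recomputed by hand, the upstream kubeadm documentation derives it from the cluster CA's public key; a hedged sketch using the certificateDir this run reported ("/var/lib/minikube/certs"):

    # prints the hex digest that appears after "sha256:" in the join command above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'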
	I0719 19:56:20.825574   76099 cni.go:84] Creating CNI manager for "kindnet"
	I0719 19:56:20.827366   76099 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 19:56:20.828656   76099 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 19:56:20.834801   76099 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 19:56:20.834821   76099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 19:56:19.446888   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.447348   76497 main.go:141] libmachine: (newest-cni-229943) Found IP for machine: 192.168.39.88
	I0719 19:56:19.447376   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has current primary IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.447384   76497 main.go:141] libmachine: (newest-cni-229943) Reserving static IP address...
	I0719 19:56:19.447866   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "newest-cni-229943", mac: "52:54:00:e4:49:e9", ip: "192.168.39.88"} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.447900   76497 main.go:141] libmachine: (newest-cni-229943) DBG | skip adding static IP to network mk-newest-cni-229943 - found existing host DHCP lease matching {name: "newest-cni-229943", mac: "52:54:00:e4:49:e9", ip: "192.168.39.88"}
	I0719 19:56:19.447914   76497 main.go:141] libmachine: (newest-cni-229943) Reserved static IP address: 192.168.39.88
	I0719 19:56:19.447941   76497 main.go:141] libmachine: (newest-cni-229943) Waiting for SSH to be available...
	I0719 19:56:19.447956   76497 main.go:141] libmachine: (newest-cni-229943) DBG | Getting to WaitForSSH function...
	I0719 19:56:19.450499   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.450962   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.451013   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.451087   76497 main.go:141] libmachine: (newest-cni-229943) DBG | Using SSH client type: external
	I0719 19:56:19.451116   76497 main.go:141] libmachine: (newest-cni-229943) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa (-rw-------)
	I0719 19:56:19.451167   76497 main.go:141] libmachine: (newest-cni-229943) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:56:19.451187   76497 main.go:141] libmachine: (newest-cni-229943) DBG | About to run SSH command:
	I0719 19:56:19.451198   76497 main.go:141] libmachine: (newest-cni-229943) DBG | exit 0
	I0719 19:56:19.580002   76497 main.go:141] libmachine: (newest-cni-229943) DBG | SSH cmd err, output: <nil>: 
	I0719 19:56:19.580361   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetConfigRaw
	I0719 19:56:19.580959   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:56:19.584109   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.584551   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.584569   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.584976   76497 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json ...
	I0719 19:56:19.585335   76497 machine.go:94] provisionDockerMachine start ...
	I0719 19:56:19.585363   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:19.585654   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:19.588009   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.588368   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.588390   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.588484   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:19.588693   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.588883   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.589060   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:19.589252   76497 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:19.589451   76497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:56:19.589464   76497 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:56:19.700143   76497 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:56:19.700172   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:56:19.700452   76497 buildroot.go:166] provisioning hostname "newest-cni-229943"
	I0719 19:56:19.700495   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:56:19.700714   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:19.703716   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.704126   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.704163   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.704324   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:19.704516   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.704665   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.704807   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:19.704934   76497 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:19.705109   76497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:56:19.705125   76497 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229943 && echo "newest-cni-229943" | sudo tee /etc/hostname
	I0719 19:56:19.831916   76497 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229943
	
	I0719 19:56:19.831946   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:19.835065   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.835412   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.835442   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.835652   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:19.835886   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.836033   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:19.836194   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:19.836358   76497 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:19.836593   76497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:56:19.836622   76497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229943/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:56:19.956125   76497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:56:19.956158   76497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:56:19.956180   76497 buildroot.go:174] setting up certificates
	I0719 19:56:19.956211   76497 provision.go:84] configureAuth start
	I0719 19:56:19.956225   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:56:19.956516   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:56:19.959827   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.960219   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.960250   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.960416   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:19.962825   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.963198   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:19.963225   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:19.963336   76497 provision.go:143] copyHostCerts
	I0719 19:56:19.963421   76497 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:56:19.963434   76497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:56:19.963502   76497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:56:19.963627   76497 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:56:19.963640   76497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:56:19.963675   76497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:56:19.963768   76497 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:56:19.963782   76497 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:56:19.963811   76497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:56:19.963912   76497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229943 san=[127.0.0.1 192.168.39.88 localhost minikube newest-cni-229943]
	I0719 19:56:20.026891   76497 provision.go:177] copyRemoteCerts
	I0719 19:56:20.026943   76497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:56:20.026975   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.029729   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.030119   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.030146   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.030338   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.030526   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.030689   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.030837   76497 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:56:20.116269   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:56:20.142911   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:56:20.170652   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:56:20.194855   76497 provision.go:87] duration metric: took 238.627331ms to configureAuth
	I0719 19:56:20.194885   76497 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:56:20.195069   76497 config.go:182] Loaded profile config "newest-cni-229943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:56:20.195146   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.197988   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.198337   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.198367   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.198555   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.198769   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.198968   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.199115   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.199306   76497 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:20.199513   76497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:56:20.199553   76497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:56:20.499293   76497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:56:20.499343   76497 machine.go:97] duration metric: took 913.986746ms to provisionDockerMachine
	I0719 19:56:20.499358   76497 start.go:293] postStartSetup for "newest-cni-229943" (driver="kvm2")
	I0719 19:56:20.499376   76497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:56:20.499416   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:20.499855   76497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:56:20.499892   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.502950   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.503363   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.503400   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.503702   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.503935   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.504098   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.504278   76497 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:56:20.591206   76497 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:56:20.595576   76497 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:56:20.595598   76497 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:56:20.595665   76497 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:56:20.595762   76497 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:56:20.595895   76497 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:56:20.606078   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:56:20.631935   76497 start.go:296] duration metric: took 132.563503ms for postStartSetup
	I0719 19:56:20.631976   76497 fix.go:56] duration metric: took 18.487618082s for fixHost
	I0719 19:56:20.631998   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.634977   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.635323   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.635352   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.635527   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.635783   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.635959   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.636146   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.636337   76497 main.go:141] libmachine: Using SSH client type: native
	I0719 19:56:20.636496   76497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.88 22 <nil> <nil>}
	I0719 19:56:20.636512   76497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:56:20.748472   76497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721418980.725806344
	
	I0719 19:56:20.748501   76497 fix.go:216] guest clock: 1721418980.725806344
	I0719 19:56:20.748511   76497 fix.go:229] Guest: 2024-07-19 19:56:20.725806344 +0000 UTC Remote: 2024-07-19 19:56:20.631981666 +0000 UTC m=+34.225920737 (delta=93.824678ms)
	I0719 19:56:20.748555   76497 fix.go:200] guest clock delta is within tolerance: 93.824678ms
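As a quick check on the numbers above: the guest clock reads 19:56:20.725806344 while the host-side timestamp taken at the same moment is 19:56:20.631981666, so the difference is 0.725806344 s − 0.631981666 s = 0.093824678 s ≈ 93.824678 ms. That matches the reported delta and is inside minikube's skew tolerance, so no clock adjustment is attempted.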
	I0719 19:56:20.748567   76497 start.go:83] releasing machines lock for "newest-cni-229943", held for 18.60423846s
	I0719 19:56:20.748600   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:20.748838   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:56:20.751492   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.751755   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.751784   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.751933   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:20.752375   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:20.752538   76497 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:56:20.752674   76497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:56:20.752716   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.752735   76497 ssh_runner.go:195] Run: cat /version.json
	I0719 19:56:20.752769   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHHostname
	I0719 19:56:20.755386   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.755686   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.755712   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.755739   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.755898   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.756042   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.756178   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.756200   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:20.756215   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:20.756362   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHPort
	I0719 19:56:20.756442   76497 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:56:20.756549   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHKeyPath
	I0719 19:56:20.756717   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetSSHUsername
	I0719 19:56:20.756855   76497 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa Username:docker}
	I0719 19:56:20.837556   76497 ssh_runner.go:195] Run: systemctl --version
	I0719 19:56:20.871579   76497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:56:21.017893   76497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:56:21.024385   76497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:56:21.024466   76497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:56:21.042600   76497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:56:21.042625   76497 start.go:495] detecting cgroup driver to use...
	I0719 19:56:21.042691   76497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:56:21.059743   76497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:56:21.075039   76497 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:56:21.075101   76497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:56:21.090699   76497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:56:21.105444   76497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:56:21.243088   76497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:56:21.446825   76497 docker.go:233] disabling docker service ...
	I0719 19:56:21.446894   76497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:56:21.460649   76497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:56:21.473200   76497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:56:21.589498   76497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:56:21.731314   76497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:56:21.746706   76497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:56:21.765616   76497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:56:21.765694   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.777037   76497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:56:21.777119   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.788269   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.799322   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.809791   76497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:56:21.820344   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.830704   76497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.850828   76497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:56:21.861228   76497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:56:21.871043   76497 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:56:21.871106   76497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:56:21.882974   76497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:56:21.892056   76497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:56:22.013728   76497 ssh_runner.go:195] Run: sudo systemctl restart crio
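Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant settings before the daemon-reload and crio restart. This is a sketch reconstructed from the commands, grouped under CRI-O's usual [crio.image]/[crio.runtime] tables; any other keys in the stock drop-in are untouched:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]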
	I0719 19:56:22.147018   76497 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:56:22.147085   76497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:56:22.151691   76497 start.go:563] Will wait 60s for crictl version
	I0719 19:56:22.151764   76497 ssh_runner.go:195] Run: which crictl
	I0719 19:56:22.155740   76497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:56:22.199158   76497 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:56:22.199259   76497 ssh_runner.go:195] Run: crio --version
	I0719 19:56:22.226452   76497 ssh_runner.go:195] Run: crio --version
	I0719 19:56:22.260478   76497 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:56:22.261887   76497 main.go:141] libmachine: (newest-cni-229943) Calling .GetIP
	I0719 19:56:22.264589   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:22.264952   76497 main.go:141] libmachine: (newest-cni-229943) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:49:e9", ip: ""} in network mk-newest-cni-229943: {Iface:virbr2 ExpiryTime:2024-07-19 20:56:13 +0000 UTC Type:0 Mac:52:54:00:e4:49:e9 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:newest-cni-229943 Clientid:01:52:54:00:e4:49:e9}
	I0719 19:56:22.265009   76497 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined IP address 192.168.39.88 and MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:56:22.265238   76497 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:56:22.269016   76497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:56:22.283353   76497 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0719 19:56:20.854285   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 19:56:21.134202   76099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:56:21.134347   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:21.134367   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-179125 minikube.k8s.io/updated_at=2024_07_19T19_56_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=kindnet-179125 minikube.k8s.io/primary=true
	I0719 19:56:21.149439   76099 ops.go:34] apiserver oom_adj: -16
	I0719 19:56:21.292625   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:21.793419   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:22.292855   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:22.793578   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:23.293554   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:23.793112   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:24.293106   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:24.793628   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:25.293073   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:25.792721   76099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:56:22.284742   76497 kubeadm.go:883] updating cluster {Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:56:22.284885   76497 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:56:22.284960   76497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:56:22.327806   76497 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:56:22.327883   76497 ssh_runner.go:195] Run: which lz4
	I0719 19:56:22.332626   76497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:56:22.336684   76497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:56:22.336720   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0719 19:56:23.563462   76497 crio.go:462] duration metric: took 1.230864806s to copy over tarball
	I0719 19:56:23.563595   76497 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:56:25.776068   76497 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212442006s)
	I0719 19:56:25.776099   76497 crio.go:469] duration metric: took 2.212595284s to extract the tarball
	I0719 19:56:25.776109   76497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:56:25.813716   76497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:56:25.857560   76497 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:56:25.857580   76497 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:56:25.857587   76497 kubeadm.go:934] updating node { 192.168.39.88 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:56:25.857747   76497 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-229943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:56:25.857852   76497 ssh_runner.go:195] Run: crio config
	I0719 19:56:25.907611   76497 cni.go:84] Creating CNI manager for ""
	I0719 19:56:25.907633   76497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:56:25.907647   76497 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0719 19:56:25.907671   76497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.88 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229943 NodeName:newest-cni-229943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.39.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:56:25.907823   76497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229943"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:56:25.907922   76497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:56:25.918365   76497 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:56:25.918435   76497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:56:25.927895   76497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0719 19:56:25.944823   76497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:56:25.962690   76497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
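The 2289-byte document written to /var/tmp/minikube/kubeadm.yaml.new is the three-part kubeadm/kubelet/kube-proxy config dumped above. To sanity-check such a file by hand, recent kubeadm releases (v1.26 and later) ship a validator; the log itself never runs this, and its availability in the guest image is an assumption:

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new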
	I0719 19:56:25.979066   76497 ssh_runner.go:195] Run: grep 192.168.39.88	control-plane.minikube.internal$ /etc/hosts
	I0719 19:56:25.982858   76497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:56:25.994920   76497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:56:26.122715   76497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:56:26.142816   76497 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943 for IP: 192.168.39.88
	I0719 19:56:26.142836   76497 certs.go:194] generating shared ca certs ...
	I0719 19:56:26.142852   76497 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:56:26.143007   76497 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:56:26.143049   76497 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:56:26.143058   76497 certs.go:256] generating profile certs ...
	I0719 19:56:26.143124   76497 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/client.key
	I0719 19:56:26.143190   76497 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key.f8d6a172
	I0719 19:56:26.143245   76497 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key
	I0719 19:56:26.143446   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:56:26.143492   76497 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:56:26.143512   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:56:26.143547   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:56:26.143575   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:56:26.143595   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:56:26.143633   76497 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:56:26.144321   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:56:26.178120   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:56:26.226742   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:56:26.262641   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:56:26.290968   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:56:26.317408   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:56:26.340819   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:56:26.366716   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:56:26.391168   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:56:26.414227   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:56:26.438418   76497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
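With the shared CA material and the profile certificates copied to /var/lib/minikube/certs and /usr/share/ca-certificates as listed above, a quick manual check that the API-server certificate chains to the copied CA would be the following; the log does not run this, it is only an optional verification using the destination paths shown above:

    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt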
	
	
	==> CRI-O <==
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.915054895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a466160b-bde7-416a-9386-5165b214cefb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.956892831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=450a2aa1-d30f-4569-b40b-3afec66aa10a name=/runtime.v1.RuntimeService/Version
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.957067345Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=450a2aa1-d30f-4569-b40b-3afec66aa10a name=/runtime.v1.RuntimeService/Version
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.958657142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7c37b84-5d71-4eb1-b75d-2b2f6c259484 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.960386177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418987960344793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7c37b84-5d71-4eb1-b75d-2b2f6c259484 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.961139617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee09e148-1242-486d-b7ba-98a06c352b02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.961213396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee09e148-1242-486d-b7ba-98a06c352b02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:27 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:27.961454365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee09e148-1242-486d-b7ba-98a06c352b02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.002697330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c8e30c3-85f5-4e34-b54b-711036e56c06 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.002890983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c8e30c3-85f5-4e34-b54b-711036e56c06 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.004184882Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdc492e7-b0b9-44c8-a06f-f6ea37961e8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.004673074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418988004646969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdc492e7-b0b9-44c8-a06f-f6ea37961e8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.005275636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=657822dd-1427-4279-ac62-3f5b46c995c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.005348470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=657822dd-1427-4279-ac62-3f5b46c995c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.005746530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=657822dd-1427-4279-ac62-3f5b46c995c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.021704172Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3f48379e-ef0d-4a3f-be9d-33f70cce9add name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.022131201Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c3072ddcf45da0b3687a00a5cf8b09a413d95ed50fa5c31321717a13df48de74,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-kz2bz,Uid:dc109926-e072-47e1-944c-a9614741fdaa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418015721302383,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-kz2bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc109926-e072-47e1-944c-a9614741fdaa,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:15.408436654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:732e14b5-3120-491f-be0f-2d2f
fcd7a799,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418015365331567,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T19:40:15.048889968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qndw,Uid:2b80cbbf-e1de-449d-affb-90c7e626e11a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014943375150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:14.332404100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmljj,Uid:e3c23a75
-7c98-4704-b159-465a46e71deb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014727451195,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:14.387060669Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&PodSandboxMetadata{Name:kube-proxy-xd6jl,Uid:699a18fd-c6e6-4c85-9799-0630bb7932b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014464458186,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:13.535283237Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-578533,Uid:096db0e04661bf40752b9a4898f2bcf9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721417995370619125,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.104:2379,kubernetes.io/config.hash: 096db0e04661bf40752b9a4898f2bcf9,kubernetes.io/config.seen: 2024-07-19T19:39:54.916929225Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ebb805e3a4c5100256765687311fbafd847
7d9c6303a3287f507489f281519c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-578533,Uid:4c50c783d6d5435770e83874c37ce2b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721417995365214257,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.104:8444,kubernetes.io/config.hash: 4c50c783d6d5435770e83874c37ce2b6,kubernetes.io/config.seen: 2024-07-19T19:39:54.916930350Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-578533,Uid:2593cfd7c8bd9f8e3f97dcfb50f20c2c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721417995353688954,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,kubernetes.io/config.seen: 2024-07-19T19:39:54.916928088Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-578533,Uid:d1619e3139d20f5a1024703bbbd8ddea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721417995348140004,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d1619e3139d20f5a1024703bbbd8ddea,kubernetes.io/config.seen: 2024-07-19T19:39:54.916924022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-578533,Uid:4c50c783d6d5435770e83874c37ce2b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721417662161128401,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.104:8444,kubernetes.io/config.hash: 4c50c783d6d5435770e83874c37ce2b6,kubernetes.io/config.s
een: 2024-07-19T19:34:21.704452194Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3f48379e-ef0d-4a3f-be9d-33f70cce9add name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.022936599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22471c6c-119a-4cca-a147-1325fba4a5bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.023194881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22471c6c-119a-4cca-a147-1325fba4a5bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.023479828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e,PodSandboxId:92eaaaa01d03c5bddeee3a99c87fac476bad631e01eb88e12013113c82a3bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721417684847346396,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22471c6c-119a-4cca-a147-1325fba4a5bf name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.342508525Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8ac4b7e-15c9-4f92-829e-1390f7188305 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.342808208Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c3072ddcf45da0b3687a00a5cf8b09a413d95ed50fa5c31321717a13df48de74,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-kz2bz,Uid:dc109926-e072-47e1-944c-a9614741fdaa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418015721302383,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-kz2bz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc109926-e072-47e1-944c-a9614741fdaa,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:15.408436654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:732e14b5-3120-491f-be0f-2d2f
fcd7a799,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418015365331567,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T19:40:15.048889968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6qndw,Uid:2b80cbbf-e1de-449d-affb-90c7e626e11a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014943375150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:14.332404100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmljj,Uid:e3c23a75
-7c98-4704-b159-465a46e71deb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014727451195,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:14.387060669Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&PodSandboxMetadata{Name:kube-proxy-xd6jl,Uid:699a18fd-c6e6-4c85-9799-0630bb7932b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721418014464458186,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T19:40:13.535283237Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-578533,Uid:096db0e04661bf40752b9a4898f2bcf9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721417995370619125,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.104:2379,kubernetes.io/config.hash: 096db0e04661bf40752b9a4898f2bcf9,kubernetes.io/config.seen: 2024-07-19T19:39:54.916929225Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ebb805e3a4c5100256765687311fbafd847
7d9c6303a3287f507489f281519c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-578533,Uid:4c50c783d6d5435770e83874c37ce2b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721417995365214257,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.104:8444,kubernetes.io/config.hash: 4c50c783d6d5435770e83874c37ce2b6,kubernetes.io/config.seen: 2024-07-19T19:39:54.916930350Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-578533,Uid:2593cfd7c8bd9f8e3f97dcfb50f20c2c,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721417995353688954,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,kubernetes.io/config.seen: 2024-07-19T19:39:54.916928088Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-578533,Uid:d1619e3139d20f5a1024703bbbd8ddea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721417995348140004,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d1619e3139d20f5a1024703bbbd8ddea,kubernetes.io/config.seen: 2024-07-19T19:39:54.916924022Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e8ac4b7e-15c9-4f92-829e-1390f7188305 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.343484996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bbebf26-1076-4a28-b771-1eea5850cf71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.343574018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bbebf26-1076-4a28-b771-1eea5850cf71 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:56:28 default-k8s-diff-port-578533 crio[735]: time="2024-07-19 19:56:28.343829055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3,PodSandboxId:828d1e03a8884251e68a9a9d4c4e6258b00a10b348ac5093b0f25adde3763749,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418015823355118,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 732e14b5-3120-491f-be0f-2d2ffcd7a799,},Annotations:map[string]string{io.kubernetes.container.hash: 91f19603,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af,PodSandboxId:48436a33fb598e082507fcb5070e4486133bcad969e81210561c4eca10560d33,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015590702250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6qndw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b80cbbf-e1de-449d-affb-90c7e626e11a,},Annotations:map[string]string{io.kubernetes.container.hash: 27a8a211,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334,PodSandboxId:986c786f16e6c5d41f4779317c3902b3533637fee4a239c794aeecd8ddb01a90,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418015245306711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gmljj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e3c23a75-7c98-4704-b159-465a46e71deb,},Annotations:map[string]string{io.kubernetes.container.hash: 34f880b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94,PodSandboxId:48b5b8f58679240b5c96e27a79d0867e4a7a49f35b6780532d40e02a6684a7e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING
,CreatedAt:1721418014679272039,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xd6jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 699a18fd-c6e6-4c85-9799-0630bb7932b9,},Annotations:map[string]string{io.kubernetes.container.hash: b9885ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd,PodSandboxId:9faabf77253d5b5b1da4a12659ca5c251c72eb04eef845991496e23e4bdeb812,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721417995599739090,Labels:ma
p[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096db0e04661bf40752b9a4898f2bcf9,},Annotations:map[string]string{io.kubernetes.container.hash: fd630f66,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15,PodSandboxId:ebb805e3a4c5100256765687311fbafd8477d9c6303a3287f507489f281519c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721417995590555458,Labels:map[string]string{io.kub
ernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c50c783d6d5435770e83874c37ce2b6,},Annotations:map[string]string{io.kubernetes.container.hash: a8d50415,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d,PodSandboxId:089d5c0dbd2753b18e533862fb795f58cfb009aaf979c3250a5fe4bbc4b7a8e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721417995594882175,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2593cfd7c8bd9f8e3f97dcfb50f20c2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe,PodSandboxId:131d57cb0ded26d1dc359e5aa09ecf9de373c1a167ba4014ffe6f723d2a0c1cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721417995498101615,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-578533,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1619e3139d20f5a1024703bbbd8ddea,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bbebf26-1076-4a28-b771-1eea5850cf71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d1c7e821430f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   828d1e03a8884       storage-provisioner
	ee742ac25345f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   48436a33fb598       coredns-7db6d8ff4d-6qndw
	2c65de2de8482       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   986c786f16e6c       coredns-7db6d8ff4d-gmljj
	27c5137e81ce4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   48b5b8f586792       kube-proxy-xd6jl
	7ad1f5dc72ccb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   9faabf77253d5       etcd-default-k8s-diff-port-578533
	ec40058eeac29       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   089d5c0dbd275       kube-scheduler-default-k8s-diff-port-578533
	bd8e28deb2f51       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            3                   ebb805e3a4c51       kube-apiserver-default-k8s-diff-port-578533
	b342b0e803235       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   3                   131d57cb0ded2       kube-controller-manager-default-k8s-diff-port-578533
	6dab364e723e9       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 minutes ago      Exited              kube-apiserver            2                   92eaaaa01d03c       kube-apiserver-default-k8s-diff-port-578533
	
	
	==> coredns [2c65de2de8482e19415059ef3be3788d783af6af981d8a04d14179e40bb76334] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ee742ac25345f19173e819339a637dea94073ade83bba182ff4ea10c5ee760af] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-578533
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-578533
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=default-k8s-diff-port-578533
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:39:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-578533
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:56:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:55:39 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:55:39 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:55:39 +0000   Fri, 19 Jul 2024 19:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:55:39 +0000   Fri, 19 Jul 2024 19:39:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    default-k8s-diff-port-578533
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2511e8bbb67146248781b4996db1e01a
	  System UUID:                2511e8bb-b671-4624-8781-b4996db1e01a
	  Boot ID:                    3e32c8bc-1bd0-40f2-b12c-953fae7d9bc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6qndw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-gmljj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-578533                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-578533             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-578533    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xd6jl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-578533             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-kz2bz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-578533 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-578533 event: Registered Node default-k8s-diff-port-578533 in Controller
	
	
	==> dmesg <==
	[  +0.050896] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul19 19:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.812550] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.517250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.098814] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.063427] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056504] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.172032] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.140914] systemd-fstab-generator[689]: Ignoring "noauto" option for root device
	[  +0.263527] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.166834] systemd-fstab-generator[818]: Ignoring "noauto" option for root device
	[  +2.261200] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.058872] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.036461] kauditd_printk_skb: 59 callbacks suppressed
	[Jul19 19:35] kauditd_printk_skb: 79 callbacks suppressed
	[Jul19 19:39] systemd-fstab-generator[3635]: Ignoring "noauto" option for root device
	[  +0.065647] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.991733] systemd-fstab-generator[3961]: Ignoring "noauto" option for root device
	[  +0.088852] kauditd_printk_skb: 55 callbacks suppressed
	[Jul19 19:40] systemd-fstab-generator[4157]: Ignoring "noauto" option for root device
	[  +0.094853] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 19:41] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7ad1f5dc72ccb7b8f8d8357d5c9d23d139367b15fbb8a70a0e82874d09f9d1dd] <==
	{"level":"info","ts":"2024-07-19T19:39:56.350694Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.104:2379"}
	{"level":"info","ts":"2024-07-19T19:39:56.360105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31eeb21fffaecb88","local-member-id":"50db06592c6308e8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.360286Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.361999Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:39:56.366988Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:39:56.367043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:39:56.367892Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:49:56.428526Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-07-19T19:49:56.438668Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":679,"took":"9.735453ms","hash":2627970385,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-19T19:49:56.438736Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2627970385,"revision":679,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T19:54:56.439103Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":921}
	{"level":"info","ts":"2024-07-19T19:54:56.443153Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":921,"took":"3.480011ms","hash":1574504579,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-19T19:54:56.443265Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1574504579,"revision":921,"compact-revision":679}
	{"level":"warn","ts":"2024-07-19T19:55:19.987808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.152259ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641922154930550862 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.104\" mod_revision:1177 > success:<request_put:<key:\"/registry/masterleases/192.168.72.104\" value_size:67 lease:641922154930550860 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.104\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T19:55:19.988567Z","caller":"traceutil/trace.go:171","msg":"trace[1918624741] transaction","detail":"{read_only:false; response_revision:1185; number_of_response:1; }","duration":"260.093658ms","start":"2024-07-19T19:55:19.72844Z","end":"2024-07-19T19:55:19.988534Z","steps":["trace[1918624741] 'process raft request'  (duration: 128.549411ms)","trace[1918624741] 'compare'  (duration: 128.825108ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:55:45.39505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.821548ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641922154930550991 > lease_revoke:<id:08e890cc8216b881>","response":"size:27"}
	{"level":"info","ts":"2024-07-19T19:56:09.09645Z","caller":"traceutil/trace.go:171","msg":"trace[1050373232] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"212.132663ms","start":"2024-07-19T19:56:08.884274Z","end":"2024-07-19T19:56:09.096407Z","steps":["trace[1050373232] 'process raft request'  (duration: 211.646932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T19:56:09.969045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.142044ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641922154930551110 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.104\" mod_revision:1218 > success:<request_put:<key:\"/registry/masterleases/192.168.72.104\" value_size:67 lease:641922154930551108 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.104\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T19:56:09.969147Z","caller":"traceutil/trace.go:171","msg":"trace[477167970] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"208.875268ms","start":"2024-07-19T19:56:09.760242Z","end":"2024-07-19T19:56:09.969117Z","steps":["trace[477167970] 'process raft request'  (duration: 76.503429ms)","trace[477167970] 'compare'  (duration: 132.041776ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:56:10.234448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.162747ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641922154930551114 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-578533\" mod_revision:1219 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-578533\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-578533\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T19:56:10.23527Z","caller":"traceutil/trace.go:171","msg":"trace[1982173480] linearizableReadLoop","detail":"{readStateIndex:1438; appliedIndex:1437; }","duration":"155.335566ms","start":"2024-07-19T19:56:10.079917Z","end":"2024-07-19T19:56:10.235252Z","steps":["trace[1982173480] 'read index received'  (duration: 26.338356ms)","trace[1982173480] 'applied index is now lower than readState.Index'  (duration: 128.995766ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T19:56:10.235502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.561134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T19:56:10.236405Z","caller":"traceutil/trace.go:171","msg":"trace[960605169] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1227; }","duration":"156.496498ms","start":"2024-07-19T19:56:10.079885Z","end":"2024-07-19T19:56:10.236381Z","steps":["trace[960605169] 'agreement among raft nodes before linearized reading'  (duration: 155.560397ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T19:56:10.235546Z","caller":"traceutil/trace.go:171","msg":"trace[1705608504] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"166.21454ms","start":"2024-07-19T19:56:10.069317Z","end":"2024-07-19T19:56:10.235532Z","steps":["trace[1705608504] 'process raft request'  (duration: 36.90582ms)","trace[1705608504] 'compare'  (duration: 127.891181ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T19:56:29.313343Z","caller":"traceutil/trace.go:171","msg":"trace[880579013] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"109.456331ms","start":"2024-07-19T19:56:29.203867Z","end":"2024-07-19T19:56:29.313323Z","steps":["trace[880579013] 'process raft request'  (duration: 109.32244ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:56:30 up 22 min,  0 users,  load average: 0.18, 0.14, 0.10
	Linux default-k8s-diff-port-578533 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6dab364e723e9421448ef1ff573d84fe92df839a75a1133f07bf2a84ebd7950e] <==
	W0719 19:39:50.945200       1 logging.go:59] [core] [Channel #86 SubChannel #87] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.108552       1 logging.go:59] [core] [Channel #83 SubChannel #84] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.242043       1 logging.go:59] [core] [Channel #13 SubChannel #14] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.254190       1 logging.go:59] [core] [Channel #95 SubChannel #96] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.256676       1 logging.go:59] [core] [Channel #89 SubChannel #90] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.268803       1 logging.go:59] [core] [Channel #59 SubChannel #60] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.274398       1 logging.go:59] [core] [Channel #17 SubChannel #18] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.325629       1 logging.go:59] [core] [Channel #101 SubChannel #102] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.354943       1 logging.go:59] [core] [Channel #56 SubChannel #57] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.379732       1 logging.go:59] [core] [Channel #140 SubChannel #141] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.442333       1 logging.go:59] [core] [Channel #119 SubChannel #120] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.475675       1 logging.go:59] [core] [Channel #137 SubChannel #138] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.536196       1 logging.go:59] [core] [Channel #77 SubChannel #78] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.567131       1 logging.go:59] [core] [Channel #32 SubChannel #33] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.610372       1 logging.go:59] [core] [Channel #44 SubChannel #45] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.664490       1 logging.go:59] [core] [Channel #47 SubChannel #48] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.693419       1 logging.go:59] [core] [Channel #80 SubChannel #81] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.771407       1 logging.go:59] [core] [Channel #161 SubChannel #162] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.785053       1 logging.go:59] [core] [Channel #104 SubChannel #105] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.814033       1 logging.go:59] [core] [Channel #110 SubChannel #111] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.856264       1 logging.go:59] [core] [Channel #155 SubChannel #156] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:51.921825       1 logging.go:59] [core] [Channel #20 SubChannel #21] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.138771       1 logging.go:59] [core] [Channel #143 SubChannel #144] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.203783       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:39:52.366474       1 logging.go:59] [core] [Channel #65 SubChannel #66] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bd8e28deb2f51e6c4ed6ca939ab870a348addc9d1173cef06f05da3c3df20a15] <==
	I0719 19:50:58.995183       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:52:58.994263       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:52:58.994360       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:52:58.994390       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:52:58.995588       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:52:58.995727       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:52:58.995756       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:54:57.999908       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:58.000074       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 19:54:59.001321       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:59.001497       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:54:59.001556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:54:59.001679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:54:59.001747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:54:59.002938       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:55:59.002688       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:55:59.002943       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 19:55:59.003056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:55:59.003159       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 19:55:59.003196       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 19:55:59.004174       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b342b0e80323559300909a4db3e718e4bd038e8b166c454b9a82aedb44323fbe] <==
	I0719 19:51:06.361081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="302.775µs"
	E0719 19:51:13.544841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:51:14.049246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:51:19.355154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="142.739µs"
	E0719 19:51:43.549922       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:51:44.057393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:52:13.555881       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:52:14.065173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:52:43.560570       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:52:44.073604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:13.568194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:53:14.082576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:43.573296       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:53:44.092570       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:13.578508       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:54:14.103359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:43.583673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:54:44.111631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:55:13.591754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:55:14.119720       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:55:43.597777       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:55:44.128702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:56:13.606814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 19:56:14.140429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:56:17.362297       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="382.216µs"
	
	
	==> kube-proxy [27c5137e81ce4a3a4314ee9abb74c9b80eee504b7940b5261240bff2ecf0ea94] <==
	I0719 19:40:14.949084       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:40:14.988650       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.104"]
	I0719 19:40:15.090504       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:40:15.090559       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:40:15.090575       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:40:15.094100       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:40:15.094324       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:40:15.094352       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:40:15.098444       1 config.go:192] "Starting service config controller"
	I0719 19:40:15.098478       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:40:15.098504       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:40:15.098508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:40:15.098924       1 config.go:319] "Starting node config controller"
	I0719 19:40:15.098996       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:40:15.200073       1 shared_informer.go:320] Caches are synced for node config
	I0719 19:40:15.200124       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:40:15.200164       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ec40058eeac298f291cf9203f510fbc71ccdacc8f4d31a5b3fc25f3c4a6d9b4d] <==
	W0719 19:39:58.018845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 19:39:58.018880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 19:39:58.846578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 19:39:58.846711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 19:39:58.847366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:39:58.847408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 19:39:58.889235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:58.889280       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.007432       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 19:39:59.007482       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:39:59.028369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:59.028410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.036392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 19:39:59.036555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 19:39:59.097637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:39:59.097680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 19:39:59.205194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 19:39:59.205238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 19:39:59.288866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 19:39:59.288993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 19:39:59.290013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 19:39:59.290549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 19:39:59.296918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 19:39:59.297048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 19:40:01.010486       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:54:07 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:54:07.342473    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:54:18 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:54:18.343318    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:54:33 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:54:33.342304    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:54:46 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:54:46.341670    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:54:58 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:54:58.341686    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:55:00 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:55:00.356103    3968 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:55:00 default-k8s-diff-port-578533 kubelet[3968]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:55:00 default-k8s-diff-port-578533 kubelet[3968]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:55:00 default-k8s-diff-port-578533 kubelet[3968]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:55:00 default-k8s-diff-port-578533 kubelet[3968]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:55:12 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:55:12.344209    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:55:25 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:55:25.342914    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:55:40 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:55:40.344587    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:55:51 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:55:51.342735    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:56:00 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:00.357802    3968 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:56:00 default-k8s-diff-port-578533 kubelet[3968]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:56:00 default-k8s-diff-port-578533 kubelet[3968]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:56:00 default-k8s-diff-port-578533 kubelet[3968]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:56:00 default-k8s-diff-port-578533 kubelet[3968]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:56:02 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:02.362549    3968 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 19:56:02 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:02.362646    3968 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 19:56:02 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:02.362930    3968 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkbqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-kz2bz_kube-system(dc109926-e072-47e1-944c-a9614741fdaa): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 19 19:56:02 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:02.363056    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:56:17 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:17.342471    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	Jul 19 19:56:30 default-k8s-diff-port-578533 kubelet[3968]: E0719 19:56:30.347811    3968 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-kz2bz" podUID="dc109926-e072-47e1-944c-a9614741fdaa"
	
	
	==> storage-provisioner [d1c7e821430f2c3e648dee20b159498d81e180c353242de20e2e6bd1f2a1c2c3] <==
	I0719 19:40:15.945666       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:40:15.961819       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:40:15.962208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:40:15.978871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:40:15.979075       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b!
	I0719 19:40:15.979671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"03b44dc7-0ac0-41e3-b227-e533b537f886", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b became leader
	I0719 19:40:16.080264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-578533_6a5f46a5-2c1c-4591-b583-5f35d471401b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-kz2bz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz: exit status 1 (81.381658ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-kz2bz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-578533 describe pod metrics-server-569cc877fc-kz2bz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (430.03s)
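The AddonExistsAfterStop failure above traces back to the root cause visible in the kubelet log: metrics-server is redirected to the unreachable fake.domain registry, so it sits in ImagePullBackOff, and the kubernetes-dashboard pods the test polls for are never created. A minimal sketch of repeating the test's two checks by hand, assuming a working kubectl; the context name and pod label are taken from the output in this report, and the metrics-server deployment name is inferred from its ReplicaSet name, not from the test code:

  # list the dashboard pods the test waits for (label as used by the harness)
  kubectl --context default-k8s-diff-port-578533 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  # show which image the metrics-server deployment was rewritten to pull
  kubectl --context default-k8s-diff-port-578533 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'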

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (312.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-971041 -n no-preload-971041
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 19:54:56.638149618 +0000 UTC m=+6070.901912600
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-971041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-971041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.733µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-971041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
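Because the describe call above hit the context deadline, the harness could not print the deployment info it compares against. When the cluster is reachable, the check amounts to reading the scraper deployment's image and matching it against registry.k8s.io/echoserver:1.4; a rough manual equivalent (the grep is illustrative, not the test's matching code):

  kubectl --context no-preload-971041 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep 'Image:'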
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-971041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-971041 logs -n 25: (1.2034035s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC | 19 Jul 24 19:54 UTC |
	| start   | -p newest-cni-229943 --memory=2200 --alsologtostderr   | newest-cni-229943            | jenkins | v1.33.1 | 19 Jul 24 19:54 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:54:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
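	
	The header above documents the klog-style prefix carried by every log line that follows. Purely as an illustration (not part of the report), a small Go snippet that splits such a line into its fields could look like this; the regular expression and field names are my own assumptions:
	
	```go
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// klogLine matches the documented prefix:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	
	func main() {
		line := `I0719 19:54:51.061991   75258 out.go:291] Setting OutFile to fd 1 ...`
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
	```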
	I0719 19:54:51.061991   75258 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:54:51.062101   75258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:54:51.062110   75258 out.go:304] Setting ErrFile to fd 2...
	I0719 19:54:51.062115   75258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:54:51.062369   75258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:54:51.062954   75258 out.go:298] Setting JSON to false
	I0719 19:54:51.064116   75258 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9434,"bootTime":1721409457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:54:51.064194   75258 start.go:139] virtualization: kvm guest
	I0719 19:54:51.066257   75258 out.go:177] * [newest-cni-229943] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:54:51.067815   75258 notify.go:220] Checking for updates...
	I0719 19:54:51.067855   75258 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:54:51.069184   75258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:54:51.070392   75258 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:54:51.071468   75258 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:54:51.072664   75258 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:54:51.073737   75258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:54:51.075485   75258 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:54:51.075635   75258 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:54:51.075764   75258 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:54:51.075884   75258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:54:51.114117   75258 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 19:54:51.115311   75258 start.go:297] selected driver: kvm2
	I0719 19:54:51.115334   75258 start.go:901] validating driver "kvm2" against <nil>
	I0719 19:54:51.115348   75258 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:54:51.116144   75258 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:54:51.116214   75258 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:54:51.131792   75258 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:54:51.131872   75258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 19:54:51.131911   75258 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 19:54:51.132168   75258 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 19:54:51.132226   75258 cni.go:84] Creating CNI manager for ""
	I0719 19:54:51.132236   75258 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:54:51.132245   75258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 19:54:51.132313   75258 start.go:340] cluster config:
	{Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
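	
	The cluster config dumped above is what minikube later persists to the profile's config.json (see the "Saving config to ..." line below). A minimal, assumption-laden sketch of reading back a few of those fields with the standard library follows; the struct is a hand-rolled subset for illustration, not minikube's actual ClusterConfig type:
	
	```go
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// profileConfig is a hand-picked subset of the fields visible in the
	// cluster config above; minikube's real type has many more.
	type profileConfig struct {
		Name             string
		Driver           string
		Memory           int
		CPUs             int
		KubernetesConfig struct {
			KubernetesVersion string
			ClusterName       string
			ContainerRuntime  string
		}
	}
	
	func main() {
		// Path taken from the log; adjust for your own MINIKUBE_HOME.
		data, err := os.ReadFile("/home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json")
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s, %dMB, %d CPUs, k8s %s on %s\n",
			cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs,
			cfg.KubernetesConfig.KubernetesVersion,
			cfg.KubernetesConfig.ContainerRuntime)
	}
	```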
	I0719 19:54:51.132418   75258 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:54:51.133840   75258 out.go:177] * Starting "newest-cni-229943" primary control-plane node in "newest-cni-229943" cluster
	I0719 19:54:51.134949   75258 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:54:51.134989   75258 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:54:51.135000   75258 cache.go:56] Caching tarball of preloaded images
	I0719 19:54:51.135069   75258 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:54:51.135081   75258 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0719 19:54:51.135186   75258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json ...
	I0719 19:54:51.135205   75258 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/newest-cni-229943/config.json: {Name:mk57a0b0495e048f2082e5fe11d1cbf229aad41c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:54:51.135368   75258 start.go:360] acquireMachinesLock for newest-cni-229943: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:54:51.135400   75258 start.go:364] duration metric: took 18.197µs to acquireMachinesLock for "newest-cni-229943"
	I0719 19:54:51.135419   75258 start.go:93] Provisioning new machine with config: &{Name:newest-cni-229943 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-229943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:54:51.135485   75258 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 19:54:51.136875   75258 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 19:54:51.137007   75258 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:54:51.137037   75258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:54:51.151156   75258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I0719 19:54:51.151622   75258 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:54:51.152212   75258 main.go:141] libmachine: Using API Version  1
	I0719 19:54:51.152237   75258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:54:51.152658   75258 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:54:51.152864   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetMachineName
	I0719 19:54:51.152998   75258 main.go:141] libmachine: (newest-cni-229943) Calling .DriverName
	I0719 19:54:51.153168   75258 start.go:159] libmachine.API.Create for "newest-cni-229943" (driver="kvm2")
	I0719 19:54:51.153212   75258 client.go:168] LocalClient.Create starting
	I0719 19:54:51.153246   75258 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem
	I0719 19:54:51.153284   75258 main.go:141] libmachine: Decoding PEM data...
	I0719 19:54:51.153319   75258 main.go:141] libmachine: Parsing certificate...
	I0719 19:54:51.153383   75258 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem
	I0719 19:54:51.153402   75258 main.go:141] libmachine: Decoding PEM data...
	I0719 19:54:51.153412   75258 main.go:141] libmachine: Parsing certificate...
	I0719 19:54:51.153427   75258 main.go:141] libmachine: Running pre-create checks...
	I0719 19:54:51.153435   75258 main.go:141] libmachine: (newest-cni-229943) Calling .PreCreateCheck
	I0719 19:54:51.153812   75258 main.go:141] libmachine: (newest-cni-229943) Calling .GetConfigRaw
	I0719 19:54:51.154213   75258 main.go:141] libmachine: Creating machine...
	I0719 19:54:51.154227   75258 main.go:141] libmachine: (newest-cni-229943) Calling .Create
	I0719 19:54:51.154366   75258 main.go:141] libmachine: (newest-cni-229943) Creating KVM machine...
	I0719 19:54:51.155642   75258 main.go:141] libmachine: (newest-cni-229943) DBG | found existing default KVM network
	I0719 19:54:51.157325   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:51.157180   75281 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0719 19:54:51.157349   75258 main.go:141] libmachine: (newest-cni-229943) DBG | created network xml: 
	I0719 19:54:51.157360   75258 main.go:141] libmachine: (newest-cni-229943) DBG | <network>
	I0719 19:54:51.157368   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   <name>mk-newest-cni-229943</name>
	I0719 19:54:51.157381   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   <dns enable='no'/>
	I0719 19:54:51.157388   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   
	I0719 19:54:51.157398   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 19:54:51.157407   75258 main.go:141] libmachine: (newest-cni-229943) DBG |     <dhcp>
	I0719 19:54:51.157415   75258 main.go:141] libmachine: (newest-cni-229943) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 19:54:51.157422   75258 main.go:141] libmachine: (newest-cni-229943) DBG |     </dhcp>
	I0719 19:54:51.157428   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   </ip>
	I0719 19:54:51.157438   75258 main.go:141] libmachine: (newest-cni-229943) DBG |   
	I0719 19:54:51.157462   75258 main.go:141] libmachine: (newest-cni-229943) DBG | </network>
	I0719 19:54:51.157494   75258 main.go:141] libmachine: (newest-cni-229943) DBG | 
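	
	The XML just printed is the NAT network the kvm2 driver creates next. For illustration only, defining and starting an equivalent network directly through the libvirt Go bindings might look roughly like the sketch below; it assumes the libvirt.org/go/libvirt package is available and qemu:///system is reachable, and is not the driver's actual code path:
	
	```go
	package main
	
	import (
		libvirt "libvirt.org/go/libvirt"
	)
	
	// networkXML mirrors the definition printed in the log above.
	const networkXML = `<network>
	  <name>mk-newest-cni-229943</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// Persistently define the network, then bring it up.
		nw, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			panic(err)
		}
		defer nw.Free()
		if err := nw.Create(); err != nil {
			panic(err)
		}
	}
	```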
	I0719 19:54:51.162458   75258 main.go:141] libmachine: (newest-cni-229943) DBG | trying to create private KVM network mk-newest-cni-229943 192.168.39.0/24...
	I0719 19:54:51.236855   75258 main.go:141] libmachine: (newest-cni-229943) Setting up store path in /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943 ...
	I0719 19:54:51.236891   75258 main.go:141] libmachine: (newest-cni-229943) DBG | private KVM network mk-newest-cni-229943 192.168.39.0/24 created
	I0719 19:54:51.236905   75258 main.go:141] libmachine: (newest-cni-229943) Building disk image from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 19:54:51.236929   75258 main.go:141] libmachine: (newest-cni-229943) Downloading /home/jenkins/minikube-integration/19307-14841/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 19:54:51.236952   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:51.236771   75281 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:54:51.472733   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:51.472570   75281 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/id_rsa...
	I0719 19:54:51.619611   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:51.619442   75281 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/newest-cni-229943.rawdisk...
	I0719 19:54:51.619655   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Writing magic tar header
	I0719 19:54:51.619689   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943 (perms=drwx------)
	I0719 19:54:51.619706   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Writing SSH key tar header
	I0719 19:54:51.619718   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube/machines (perms=drwxr-xr-x)
	I0719 19:54:51.619739   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841/.minikube (perms=drwxr-xr-x)
	I0719 19:54:51.619757   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins/minikube-integration/19307-14841 (perms=drwxrwxr-x)
	I0719 19:54:51.619774   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:51.619560   75281 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943 ...
	I0719 19:54:51.619794   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943
	I0719 19:54:51.619806   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube/machines
	I0719 19:54:51.619820   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 19:54:51.619847   75258 main.go:141] libmachine: (newest-cni-229943) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 19:54:51.619859   75258 main.go:141] libmachine: (newest-cni-229943) Creating domain...
	I0719 19:54:51.619870   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:54:51.619902   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19307-14841
	I0719 19:54:51.619916   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 19:54:51.619928   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home/jenkins
	I0719 19:54:51.619935   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Checking permissions on dir: /home
	I0719 19:54:51.619947   75258 main.go:141] libmachine: (newest-cni-229943) DBG | Skipping /home - not owner
	I0719 19:54:51.622066   75258 main.go:141] libmachine: (newest-cni-229943) define libvirt domain using xml: 
	I0719 19:54:51.622090   75258 main.go:141] libmachine: (newest-cni-229943) <domain type='kvm'>
	I0719 19:54:51.622106   75258 main.go:141] libmachine: (newest-cni-229943)   <name>newest-cni-229943</name>
	I0719 19:54:51.622113   75258 main.go:141] libmachine: (newest-cni-229943)   <memory unit='MiB'>2200</memory>
	I0719 19:54:51.622122   75258 main.go:141] libmachine: (newest-cni-229943)   <vcpu>2</vcpu>
	I0719 19:54:51.622133   75258 main.go:141] libmachine: (newest-cni-229943)   <features>
	I0719 19:54:51.622143   75258 main.go:141] libmachine: (newest-cni-229943)     <acpi/>
	I0719 19:54:51.622153   75258 main.go:141] libmachine: (newest-cni-229943)     <apic/>
	I0719 19:54:51.622164   75258 main.go:141] libmachine: (newest-cni-229943)     <pae/>
	I0719 19:54:51.622175   75258 main.go:141] libmachine: (newest-cni-229943)     
	I0719 19:54:51.622203   75258 main.go:141] libmachine: (newest-cni-229943)   </features>
	I0719 19:54:51.622225   75258 main.go:141] libmachine: (newest-cni-229943)   <cpu mode='host-passthrough'>
	I0719 19:54:51.622232   75258 main.go:141] libmachine: (newest-cni-229943)   
	I0719 19:54:51.622247   75258 main.go:141] libmachine: (newest-cni-229943)   </cpu>
	I0719 19:54:51.622255   75258 main.go:141] libmachine: (newest-cni-229943)   <os>
	I0719 19:54:51.622260   75258 main.go:141] libmachine: (newest-cni-229943)     <type>hvm</type>
	I0719 19:54:51.622267   75258 main.go:141] libmachine: (newest-cni-229943)     <boot dev='cdrom'/>
	I0719 19:54:51.622275   75258 main.go:141] libmachine: (newest-cni-229943)     <boot dev='hd'/>
	I0719 19:54:51.622283   75258 main.go:141] libmachine: (newest-cni-229943)     <bootmenu enable='no'/>
	I0719 19:54:51.622290   75258 main.go:141] libmachine: (newest-cni-229943)   </os>
	I0719 19:54:51.622296   75258 main.go:141] libmachine: (newest-cni-229943)   <devices>
	I0719 19:54:51.622303   75258 main.go:141] libmachine: (newest-cni-229943)     <disk type='file' device='cdrom'>
	I0719 19:54:51.622311   75258 main.go:141] libmachine: (newest-cni-229943)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/boot2docker.iso'/>
	I0719 19:54:51.622321   75258 main.go:141] libmachine: (newest-cni-229943)       <target dev='hdc' bus='scsi'/>
	I0719 19:54:51.622327   75258 main.go:141] libmachine: (newest-cni-229943)       <readonly/>
	I0719 19:54:51.622334   75258 main.go:141] libmachine: (newest-cni-229943)     </disk>
	I0719 19:54:51.622343   75258 main.go:141] libmachine: (newest-cni-229943)     <disk type='file' device='disk'>
	I0719 19:54:51.622354   75258 main.go:141] libmachine: (newest-cni-229943)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 19:54:51.622364   75258 main.go:141] libmachine: (newest-cni-229943)       <source file='/home/jenkins/minikube-integration/19307-14841/.minikube/machines/newest-cni-229943/newest-cni-229943.rawdisk'/>
	I0719 19:54:51.622371   75258 main.go:141] libmachine: (newest-cni-229943)       <target dev='hda' bus='virtio'/>
	I0719 19:54:51.622376   75258 main.go:141] libmachine: (newest-cni-229943)     </disk>
	I0719 19:54:51.622383   75258 main.go:141] libmachine: (newest-cni-229943)     <interface type='network'>
	I0719 19:54:51.622389   75258 main.go:141] libmachine: (newest-cni-229943)       <source network='mk-newest-cni-229943'/>
	I0719 19:54:51.622396   75258 main.go:141] libmachine: (newest-cni-229943)       <model type='virtio'/>
	I0719 19:54:51.622401   75258 main.go:141] libmachine: (newest-cni-229943)     </interface>
	I0719 19:54:51.622410   75258 main.go:141] libmachine: (newest-cni-229943)     <interface type='network'>
	I0719 19:54:51.622416   75258 main.go:141] libmachine: (newest-cni-229943)       <source network='default'/>
	I0719 19:54:51.622422   75258 main.go:141] libmachine: (newest-cni-229943)       <model type='virtio'/>
	I0719 19:54:51.622455   75258 main.go:141] libmachine: (newest-cni-229943)     </interface>
	I0719 19:54:51.622482   75258 main.go:141] libmachine: (newest-cni-229943)     <serial type='pty'>
	I0719 19:54:51.622505   75258 main.go:141] libmachine: (newest-cni-229943)       <target port='0'/>
	I0719 19:54:51.622519   75258 main.go:141] libmachine: (newest-cni-229943)     </serial>
	I0719 19:54:51.622529   75258 main.go:141] libmachine: (newest-cni-229943)     <console type='pty'>
	I0719 19:54:51.622541   75258 main.go:141] libmachine: (newest-cni-229943)       <target type='serial' port='0'/>
	I0719 19:54:51.622553   75258 main.go:141] libmachine: (newest-cni-229943)     </console>
	I0719 19:54:51.622568   75258 main.go:141] libmachine: (newest-cni-229943)     <rng model='virtio'>
	I0719 19:54:51.622580   75258 main.go:141] libmachine: (newest-cni-229943)       <backend model='random'>/dev/random</backend>
	I0719 19:54:51.622590   75258 main.go:141] libmachine: (newest-cni-229943)     </rng>
	I0719 19:54:51.622610   75258 main.go:141] libmachine: (newest-cni-229943)     
	I0719 19:54:51.622626   75258 main.go:141] libmachine: (newest-cni-229943)     
	I0719 19:54:51.622631   75258 main.go:141] libmachine: (newest-cni-229943)   </devices>
	I0719 19:54:51.622648   75258 main.go:141] libmachine: (newest-cni-229943) </domain>
	I0719 19:54:51.622658   75258 main.go:141] libmachine: (newest-cni-229943) 
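	
	The domain XML above is handed to libvirt next ("Creating domain..."). A very rough equivalent using the same Go bindings, under the same assumptions as the network sketch, would define the domain from that XML and boot it; the file name here is a stand-in for the XML printed in the log:
	
	```go
	package main
	
	import (
		"fmt"
		"os"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		// The XML printed in the log, saved to a local file for this sketch.
		xml, err := os.ReadFile("newest-cni-229943.xml")
		if err != nil {
			panic(err)
		}
	
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// Define the persistent domain, then start it: roughly the
		// "define libvirt domain using xml" and "Creating domain..." steps.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			panic(err)
		}
	
		state, _, err := dom.GetState()
		if err != nil {
			panic(err)
		}
		fmt.Println("domain running:", state == libvirt.DOMAIN_RUNNING)
	}
	```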
	I0719 19:54:51.626859   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:6d:be:22 in network default
	I0719 19:54:51.627438   75258 main.go:141] libmachine: (newest-cni-229943) Ensuring networks are active...
	I0719 19:54:51.627455   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:51.628134   75258 main.go:141] libmachine: (newest-cni-229943) Ensuring network default is active
	I0719 19:54:51.628361   75258 main.go:141] libmachine: (newest-cni-229943) Ensuring network mk-newest-cni-229943 is active
	I0719 19:54:51.628847   75258 main.go:141] libmachine: (newest-cni-229943) Getting domain xml...
	I0719 19:54:51.629501   75258 main.go:141] libmachine: (newest-cni-229943) Creating domain...
	I0719 19:54:52.884492   75258 main.go:141] libmachine: (newest-cni-229943) Waiting to get IP...
	I0719 19:54:52.885421   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:52.885892   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:52.885916   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:52.885860   75281 retry.go:31] will retry after 227.614248ms: waiting for machine to come up
	I0719 19:54:53.115411   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:53.115942   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:53.115972   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:53.115899   75281 retry.go:31] will retry after 384.283511ms: waiting for machine to come up
	I0719 19:54:53.501414   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:53.501893   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:53.501931   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:53.501843   75281 retry.go:31] will retry after 407.358364ms: waiting for machine to come up
	I0719 19:54:53.910474   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:53.910986   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:53.911015   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:53.910941   75281 retry.go:31] will retry after 441.977844ms: waiting for machine to come up
	I0719 19:54:54.354632   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:54.355063   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:54.355081   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:54.355005   75281 retry.go:31] will retry after 673.464663ms: waiting for machine to come up
	I0719 19:54:55.029934   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:55.030323   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:55.030357   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:55.030294   75281 retry.go:31] will retry after 857.33822ms: waiting for machine to come up
	I0719 19:54:55.889057   75258 main.go:141] libmachine: (newest-cni-229943) DBG | domain newest-cni-229943 has defined MAC address 52:54:00:e4:49:e9 in network mk-newest-cni-229943
	I0719 19:54:55.889607   75258 main.go:141] libmachine: (newest-cni-229943) DBG | unable to find current IP address of domain newest-cni-229943 in network mk-newest-cni-229943
	I0719 19:54:55.889640   75258 main.go:141] libmachine: (newest-cni-229943) DBG | I0719 19:54:55.889529   75281 retry.go:31] will retry after 1.102513755s: waiting for machine to come up
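	
	The "will retry after ..." lines above come from minikube's retry helper polling libvirt for a DHCP lease. As a standalone illustration of that pattern (the function, durations, and fake lookup below are invented for the example, not minikube's actual retry.go), the idea is simply a poll with growing delays until a deadline:
	
	```go
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// waitForIP polls lookup until it returns a non-empty address or the
	// deadline passes, sleeping a little longer after each failed attempt.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("attempt %d: no IP yet, will retry after %v\n", attempt, delay)
			time.Sleep(delay)
			if delay < 2*time.Second {
				delay += delay / 2 // grow the wait, roughly like the log above
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}
	
	func main() {
		start := time.Now()
		// A fake lookup that "finds" the address after ~1.5s, for demonstration.
		ip, err := waitForIP(func() (string, error) {
			if time.Since(start) > 1500*time.Millisecond {
				return "192.168.39.10", nil
			}
			return "", errors.New("no lease yet")
		}, 10*time.Second)
		fmt.Println(ip, err)
	}
	```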
	
	
	==> CRI-O <==
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.200271911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418897200252680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cc60cc3-beeb-47ae-8198-da5673de406e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.201355530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4112c9e-56ff-4452-b150-7d7c619349ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.201409051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4112c9e-56ff-4452-b150-7d7c619349ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.201598189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4112c9e-56ff-4452-b150-7d7c619349ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.239948177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86c3037f-17a4-4354-b051-1ea4ae1be4ec name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.240028640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86c3037f-17a4-4354-b051-1ea4ae1be4ec name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.240856480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8edf601e-240b-4b13-946b-50ecdc55df00 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.241210292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418897241188914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8edf601e-240b-4b13-946b-50ecdc55df00 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.241826897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff12649b-dc3e-4319-b310-b2483b21837a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.241881832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff12649b-dc3e-4319-b310-b2483b21837a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.242093351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff12649b-dc3e-4319-b310-b2483b21837a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.278574536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=411cf8c3-f176-4348-8f81-e6f3bc33423a name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.278669066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=411cf8c3-f176-4348-8f81-e6f3bc33423a name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.280029644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce8bf6c3-a2d8-4d5d-a5dd-e3a73412df9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.280380128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418897280360844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce8bf6c3-a2d8-4d5d-a5dd-e3a73412df9b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.280875793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec0c8702-3cde-4b1d-8a9c-913f3eca80b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.280922864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec0c8702-3cde-4b1d-8a9c-913f3eca80b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.281138504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec0c8702-3cde-4b1d-8a9c-913f3eca80b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.320598203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee595646-2b67-49d5-8348-1f1f6abd1e7e name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.320673407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee595646-2b67-49d5-8348-1f1f6abd1e7e name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.322812354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9821ee03-0ec6-4e1a-8eec-9276fc1c39d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.323181751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418897323160922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9821ee03-0ec6-4e1a-8eec-9276fc1c39d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.323611645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2e72967-b44f-4226-959a-1622eef197bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.323663972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2e72967-b44f-4226-959a-1622eef197bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:57 no-preload-971041 crio[727]: time="2024-07-19 19:54:57.323934212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf,PodSandboxId:09dac1725945b48e94edb63c465b1519ae04965dc88460932fc9f1b97dec7794,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721418032997486470,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fedb9d1a-139b-4627-8efa-64dd1481d8d4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd,PodSandboxId:b543f5028c7b46b286a2fcc1827b77f412d70c8a609dd9af447f9e3fbac92d11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032909002159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-7kdqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b0eeec5-f993-4d90-a3e0-5254bcc39f64,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509,PodSandboxId:112d13271d54bf2abe4d3b208c3c0bd17d9af1237ee2e3ca26d44d9648e711f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721418032760510232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-d5xr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f
5a96c3-84ef-429c-b31d-472fcc247862,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5,PodSandboxId:ec119dcaf5f52dd347a19075df89aab7fa8c99567508a8b165d90bbf0233ce7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721418031411794224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-445wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 468d052a-a8d2-47f6-9054-813dd3ee33ee,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0,PodSandboxId:27241a85a80e07d726d2e27488ff9698351c76e51cd6554bcca010f09b41c300,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721418020630468660,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 204ea3c3d8c5129c562f4a760458c051,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89,PodSandboxId:33f1d8ea18a503178d8c13dd23af9b4b5c4ce1846556a6bdca7716c512ca13ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721418020537823130,Labels:map[string]string{io.kubernetes.container.name
: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb080b6fd04c15f65640b834bf6dffcd,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8,PodSandboxId:010a590647984114e3de579b710f60bc66df7d6ff6460d989071c28426f9b882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721418020526868452,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2,PodSandboxId:38c2e5a5bf35048fc50640c4dd17109cbbde7ae04179abb0cc36112e9f94032a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721418020467174975,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5f2073312f2003bff042479372294c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95,PodSandboxId:f0e74b3e6aae6abed27d6900acb8d2f63d924020bd6fae3d87b118536bd8941f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721417734430001969,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-971041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f08d7fe60cc6a65a21ef74a31a923766,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2e72967-b44f-4226-959a-1622eef197bd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4026f272ae2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   09dac1725945b       storage-provisioner
	9bd028fbdc83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   b543f5028c7b4       coredns-5cfdc65f69-7kdqd
	320e4f1c64889       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   112d13271d54b       coredns-5cfdc65f69-d5xr6
	a143b1d70f220       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   ec119dcaf5f52       kube-proxy-445wh
	6d1093d162109       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   27241a85a80e0       etcd-no-preload-971041
	007fed8712ef2       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   33f1d8ea18a50       kube-scheduler-no-preload-971041
	7b4bc3716261b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   010a590647984       kube-apiserver-no-preload-971041
	f2c630159ccd4       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   38c2e5a5bf350       kube-controller-manager-no-preload-971041
	49caf99a342cf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   f0e74b3e6aae6       kube-apiserver-no-preload-971041
	
	
	==> coredns [320e4f1c648899d65709fe2b59481bf622dac880587ad684553a7dfb62278509] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9bd028fbdc83b77e87fff78ccc73a5bbe9f449056bce3b02a01f07266aa177cd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-971041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-971041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=no-preload-971041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-971041
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:50:49 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:50:49 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:50:49 +0000   Fri, 19 Jul 2024 19:40:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:50:49 +0000   Fri, 19 Jul 2024 19:40:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.119
	  Hostname:    no-preload-971041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b4cbd261858437eabb71e1d91bdb15f
	  System UUID:                6b4cbd26-1858-437e-abb7-1e1d91bdb15f
	  Boot ID:                    13dfdfb1-c68e-4be1-bf8e-2dde7420e709
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-7kdqd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-d5xr6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-971041                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-971041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-971041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-445wh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-971041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-r6hdb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-971041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-971041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-971041 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-971041 event: Registered Node no-preload-971041 in Controller
	
	
	==> dmesg <==
	[  +0.043904] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul19 19:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.838577] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531345] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.474200] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.063994] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060522] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.185227] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.145427] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.289259] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +14.713791] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.056477] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.943248] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +3.399678] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.013528] kauditd_printk_skb: 79 callbacks suppressed
	[Jul19 19:36] kauditd_printk_skb: 8 callbacks suppressed
	[Jul19 19:40] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.335736] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +4.842933] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.712419] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[  +5.688611] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.205528] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	[  +7.324737] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [6d1093d162109f27dc8b8cdbda430ea0f0089db5cdc9582fbba6c9f62db6ebe0] <==
	{"level":"info","ts":"2024-07-19T19:40:20.909539Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.119:2380"}
	{"level":"info","ts":"2024-07-19T19:40:20.909572Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.119:2380"}
	{"level":"info","ts":"2024-07-19T19:40:21.547832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.547939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.548016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 received MsgPreVoteResp from dbf7131a3d9cd4a2 at term 1"}
	{"level":"info","ts":"2024-07-19T19:40:21.548073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 received MsgVoteResp from dbf7131a3d9cd4a2 at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dbf7131a3d9cd4a2 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.548187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dbf7131a3d9cd4a2 elected leader dbf7131a3d9cd4a2 at term 2"}
	{"level":"info","ts":"2024-07-19T19:40:21.553073Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.555197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:40:21.556192Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:40:21.559807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:40:21.559847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:40:21.559019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"753d5ab11f342e1f","local-member-id":"dbf7131a3d9cd4a2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.566819Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.566892Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:40:21.559048Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:40:21.559089Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dbf7131a3d9cd4a2","local-member-attributes":"{Name:no-preload-971041 ClientURLs:[https://192.168.50.119:2379]}","request-path":"/0/members/dbf7131a3d9cd4a2/attributes","cluster-id":"753d5ab11f342e1f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:40:21.567788Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T19:40:21.568583Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.119:2379"}
	{"level":"info","ts":"2024-07-19T19:40:21.570833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:50:21.597194Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-07-19T19:50:21.606636Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"8.842784ms","hash":1873184541,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-19T19:50:21.606794Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1873184541,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 19:54:57 up 19 min,  0 users,  load average: 0.20, 0.31, 0.27
	Linux no-preload-971041 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49caf99a342cff7f9ed1c959f09529dd2f983f2268ec2b5947380d81ed812e95] <==
	W0719 19:40:14.471566       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.471967       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.473458       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.485333       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.517829       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.538567       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.563321       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.604320       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.633280       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.707558       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.716823       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.809960       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.838244       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.867439       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.871997       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.880450       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.939039       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:14.995399       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.057127       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.127202       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.145617       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.149383       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.156971       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.259503       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:40:15.321313       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7b4bc3716261bce674e7f77a1d8e7872ff35c8557bed7653afde8c882698ffd8] <==
	W0719 19:50:24.282765       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:50:24.282947       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0719 19:50:24.283932       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:50:24.284000       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:51:24.284897       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:51:24.284982       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 19:51:24.285159       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:51:24.285219       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 19:51:24.286141       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:51:24.286316       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 19:53:24.286964       1 handler_proxy.go:99] no RequestInfo found in the context
	W0719 19:53:24.286981       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 19:53:24.287180       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0719 19:53:24.287081       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 19:53:24.288442       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 19:53:24.288519       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f2c630159ccd425562f552331464be0b4217958f3a1a9cc41403dcd4ea6840e2] <==
	I0719 19:49:31.296670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:49:31.300419       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:50:01.306412       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:50:01.307074       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	E0719 19:50:31.314592       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:50:31.316178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:50:49.778347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-971041"
	E0719 19:51:01.321396       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:51:01.323955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:51:31.053227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="170.114µs"
	E0719 19:51:31.335124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:51:31.337556       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 19:51:42.049600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="75.114µs"
	E0719 19:52:01.344554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:52:01.348610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:52:31.354952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:52:31.358310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:01.361461       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:53:01.365359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:53:31.369030       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:53:31.376045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:01.377167       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:54:01.383931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 19:54:31.383943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 19:54:31.391168       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a143b1d70f2200d56bbf9db465436c00dc0c280f563a16bda424a5482a45e9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 19:40:31.687563       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 19:40:31.705669       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.119"]
	E0719 19:40:31.705986       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 19:40:31.746525       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 19:40:31.746607       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:40:31.746650       1 server_linux.go:170] "Using iptables Proxier"
	I0719 19:40:31.749078       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 19:40:31.749330       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 19:40:31.749526       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:40:31.752264       1 config.go:197] "Starting service config controller"
	I0719 19:40:31.752296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:40:31.752321       1 config.go:104] "Starting endpoint slice config controller"
	I0719 19:40:31.752331       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:40:31.753631       1 config.go:326] "Starting node config controller"
	I0719 19:40:31.753657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:40:31.853096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:40:31.853259       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:40:31.853783       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [007fed8712ef2f9d32924f70d3f28121bc2d8aaaf3f7a1073545252fe4aaab89] <==
	W0719 19:40:23.352447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 19:40:23.352514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:23.352554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.352657       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:40:23.355037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.354866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:23.355077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:23.354944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:40:23.355149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0719 19:40:23.352522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.158752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:24.158867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.234501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:40:24.234550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.355940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:40:24.355991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.518926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:40:24.519045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.533178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 19:40:24.533239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.541573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 19:40:24.541677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 19:40:24.714083       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 19:40:24.714349       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0719 19:40:27.617897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:52:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:52:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:52:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:52:38 no-preload-971041 kubelet[3272]: E0719 19:52:38.036082    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:52:49 no-preload-971041 kubelet[3272]: E0719 19:52:49.033200    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:53:01 no-preload-971041 kubelet[3272]: E0719 19:53:01.032633    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:53:13 no-preload-971041 kubelet[3272]: E0719 19:53:13.032660    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:53:24 no-preload-971041 kubelet[3272]: E0719 19:53:24.033347    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:53:26 no-preload-971041 kubelet[3272]: E0719 19:53:26.067859    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:53:26 no-preload-971041 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:53:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:53:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:53:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:53:37 no-preload-971041 kubelet[3272]: E0719 19:53:37.033307    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:53:52 no-preload-971041 kubelet[3272]: E0719 19:53:52.034955    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:54:05 no-preload-971041 kubelet[3272]: E0719 19:54:05.034388    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:54:19 no-preload-971041 kubelet[3272]: E0719 19:54:19.033203    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:54:26 no-preload-971041 kubelet[3272]: E0719 19:54:26.071673    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:54:26 no-preload-971041 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:54:26 no-preload-971041 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:54:26 no-preload-971041 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:54:26 no-preload-971041 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:54:33 no-preload-971041 kubelet[3272]: E0719 19:54:33.033346    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:54:46 no-preload-971041 kubelet[3272]: E0719 19:54:46.033251    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	Jul 19 19:54:57 no-preload-971041 kubelet[3272]: E0719 19:54:57.032913    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-r6hdb" podUID="13d26205-03d4-4719-bfa0-199a2e23b52a"
	
	
	==> storage-provisioner [d4026f272ae2afbd759a2eead22c7ba19769bf5996b34bc2a300786ea12071bf] <==
	I0719 19:40:33.345865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:40:33.371192       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:40:33.371330       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:40:33.387320       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:40:33.387921       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f!
	I0719 19:40:33.387959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dd3d92e-4654-477a-8925-bc8a8939fedb", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f became leader
	I0719 19:40:33.488778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-971041_ced764da-befb-41f0-a52c-e1ef0fbb6e8f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-971041 -n no-preload-971041
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-971041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-r6hdb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb: exit status 1 (67.061388ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-r6hdb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-971041 describe pod metrics-server-78fcd8795b-r6hdb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (312.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (157.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
[the identical "connection refused" warning from helpers_test.go:329 repeats verbatim for each subsequent poll attempt]
E0719 19:52:32.426966   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
E0719 19:54:43.524856   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.220:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.220:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (232.631552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-570369" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-570369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-570369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.835µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-570369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
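For context on the repeated WARNING lines above: the test keeps polling the API server for dashboard pods by label selector until the 9m0s deadline, and every attempt fails immediately with the dial error because the apiserver on 192.168.39.220:8443 is stopped. A minimal client-go sketch of that kind of list-by-label call (an illustration only, not the actual helper behind helpers_test.go:329; the kubeconfig path is hypothetical) is:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the real test derives it from the minikube profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List dashboard pods by label selector; while the API server is stopped this
	// returns the "connect: connection refused" error seen in the warnings above.
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		log.Fatalf("pod list failed: %v", err)
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}

Because the control plane never comes back during the poll window, each attempt fails the same way until the context deadline is exceeded.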
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (228.436883ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
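The --format={{.Host}} and --format={{.APIServer}} flags used above are Go text/template expressions rendered against minikube's status output; a small self-contained sketch of how such a template selects a single field (the Status struct here is illustrative, not minikube's actual type) is:

package main

import (
	"os"
	"text/template"
)

// Illustrative status struct; the field names mirror the templates used in this report.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// {{.APIServer}} renders just that one field, which is what the test parses.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
}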
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-570369 logs -n 25: (1.542053071s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-821162                           | kubernetes-upgrade-821162    | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:25 UTC |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:25 UTC | 19 Jul 24 19:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-666146                                        | pause-666146                 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-994300 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | disable-driver-mounts-994300                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:27 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-971041             | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC | 19 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-971041                                   | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:26 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-578533  | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-889918            | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC | 19 Jul 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-971041                  | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-971041 --memory=2200                     | no-preload-971041            | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC | 19 Jul 24 19:40 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-570369        | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-578533       | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-889918                 | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-578533 | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:40 UTC |
	|         | default-k8s-diff-port-578533                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-889918                                  | embed-certs-889918           | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:30 UTC | 19 Jul 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-570369             | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC | 19 Jul 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-570369                              | old-k8s-version-570369       | jenkins | v1.33.1 | 19 Jul 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 19:31:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 19:31:02.001058   68995 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:31:02.001478   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001493   68995 out.go:304] Setting ErrFile to fd 2...
	I0719 19:31:02.001500   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:31:02.001955   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:31:02.002951   68995 out.go:298] Setting JSON to false
	I0719 19:31:02.003871   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8005,"bootTime":1721409457,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:31:02.003932   68995 start.go:139] virtualization: kvm guest
	I0719 19:31:02.005683   68995 out.go:177] * [old-k8s-version-570369] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:31:02.007426   68995 notify.go:220] Checking for updates...
	I0719 19:31:02.007440   68995 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:31:02.008887   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:31:02.010101   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:31:02.011382   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:31:02.012573   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:31:02.013796   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:31:02.015353   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:31:02.015711   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.015750   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.030334   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0719 19:31:02.030703   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.031167   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.031204   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.031520   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.031688   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.033460   68995 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 19:31:02.034684   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:31:02.034983   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:31:02.035018   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:31:02.049101   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0719 19:31:02.049495   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:31:02.049992   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:31:02.050006   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:31:02.050295   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:31:02.050462   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:31:02.085393   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 19:31:02.086642   68995 start.go:297] selected driver: kvm2
	I0719 19:31:02.086658   68995 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.086766   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:31:02.087430   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.087497   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 19:31:02.102357   68995 install.go:137] /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0719 19:31:02.102731   68995 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:31:02.102797   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:31:02.102814   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:31:02.102857   68995 start.go:340] cluster config:
	{Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:31:02.102975   68995 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 19:31:02.104839   68995 out.go:177] * Starting "old-k8s-version-570369" primary control-plane node in "old-k8s-version-570369" cluster
	I0719 19:30:58.496088   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:02.106080   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:31:02.106120   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 19:31:02.106127   68995 cache.go:56] Caching tarball of preloaded images
	I0719 19:31:02.106200   68995 preload.go:172] Found /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 19:31:02.106217   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 19:31:02.106343   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:31:02.106543   68995 start.go:360] acquireMachinesLock for old-k8s-version-570369: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:31:04.576109   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:07.648198   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:13.728139   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:16.800162   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:22.880170   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:25.952158   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:32.032094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:35.104127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:41.184155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:44.256157   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:50.336135   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:53.408126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:31:59.488236   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:02.560149   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:08.640107   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:11.712145   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:17.792126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:20.864127   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:26.944130   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:30.016118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:36.096094   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:39.168160   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:45.248117   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:48.320210   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:54.400118   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:32:57.472159   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:03.552119   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:06.624189   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:12.704169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:15.776085   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:21.856121   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:24.928169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:31.008143   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:34.080169   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:40.160126   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:43.232131   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:49.312133   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:52.384155   68019 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.119:22: connect: no route to host
	I0719 19:33:55.388766   68536 start.go:364] duration metric: took 3m51.88250847s to acquireMachinesLock for "default-k8s-diff-port-578533"
	I0719 19:33:55.388818   68536 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:33:55.388827   68536 fix.go:54] fixHost starting: 
	I0719 19:33:55.389286   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:33:55.389320   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:33:55.405253   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0719 19:33:55.405681   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:33:55.406171   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:33:55.406199   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:33:55.406526   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:33:55.406725   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:33:55.406877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:33:55.408620   68536 fix.go:112] recreateIfNeeded on default-k8s-diff-port-578533: state=Stopped err=<nil>
	I0719 19:33:55.408664   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	W0719 19:33:55.408859   68536 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:33:55.410653   68536 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-578533" ...
	I0719 19:33:55.386190   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:33:55.386229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386570   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:33:55.386604   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:33:55.386798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:33:55.388625   68019 machine.go:97] duration metric: took 4m37.402512771s to provisionDockerMachine
	I0719 19:33:55.388679   68019 fix.go:56] duration metric: took 4m37.424232342s for fixHost
	I0719 19:33:55.388689   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 4m37.424257446s
	W0719 19:33:55.388716   68019 start.go:714] error starting host: provision: host is not running
	W0719 19:33:55.388829   68019 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 19:33:55.388840   68019 start.go:729] Will try again in 5 seconds ...
	I0719 19:33:55.412077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Start
	I0719 19:33:55.412257   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring networks are active...
	I0719 19:33:55.413065   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network default is active
	I0719 19:33:55.413504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Ensuring network mk-default-k8s-diff-port-578533 is active
	I0719 19:33:55.413931   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Getting domain xml...
	I0719 19:33:55.414619   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Creating domain...
	I0719 19:33:56.637676   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting to get IP...
	I0719 19:33:56.638545   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638912   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.638971   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.638901   69604 retry.go:31] will retry after 210.865266ms: waiting for machine to come up
	I0719 19:33:56.851496   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851959   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:56.851986   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:56.851920   69604 retry.go:31] will retry after 336.006729ms: waiting for machine to come up
	I0719 19:33:57.189571   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.189945   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.189885   69604 retry.go:31] will retry after 325.384423ms: waiting for machine to come up
	I0719 19:33:57.516504   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517043   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.517072   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.516989   69604 retry.go:31] will retry after 438.127108ms: waiting for machine to come up
	I0719 19:33:57.956641   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957091   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:57.957110   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:57.957066   69604 retry.go:31] will retry after 656.518154ms: waiting for machine to come up
	I0719 19:34:00.391238   68019 start.go:360] acquireMachinesLock for no-preload-971041: {Name:mk45aeee801587237bb965fa260501e9fac6f233 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 19:33:58.614988   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615352   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:58.615398   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:58.615292   69604 retry.go:31] will retry after 854.967551ms: waiting for machine to come up
	I0719 19:33:59.471470   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471922   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:33:59.471950   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:33:59.471872   69604 retry.go:31] will retry after 1.032891444s: waiting for machine to come up
	I0719 19:34:00.506554   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507020   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:00.507051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:00.506976   69604 retry.go:31] will retry after 1.30559154s: waiting for machine to come up
	I0719 19:34:01.814516   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814877   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:01.814903   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:01.814817   69604 retry.go:31] will retry after 1.152581751s: waiting for machine to come up
	I0719 19:34:02.968913   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969375   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:02.969399   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:02.969317   69604 retry.go:31] will retry after 1.415187183s: waiting for machine to come up
	I0719 19:34:04.386365   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:04.386937   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:04.386839   69604 retry.go:31] will retry after 2.854689482s: waiting for machine to come up
	I0719 19:34:07.243500   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244001   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:07.244031   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:07.243865   69604 retry.go:31] will retry after 2.536982272s: waiting for machine to come up
	I0719 19:34:09.783749   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784224   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | unable to find current IP address of domain default-k8s-diff-port-578533 in network mk-default-k8s-diff-port-578533
	I0719 19:34:09.784256   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | I0719 19:34:09.784163   69604 retry.go:31] will retry after 3.046898328s: waiting for machine to come up
	I0719 19:34:12.833907   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.834440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Found IP for machine: 192.168.72.104
	I0719 19:34:12.834468   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserving static IP address...
	I0719 19:34:12.834484   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has current primary IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.835035   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.835064   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | skip adding static IP to network mk-default-k8s-diff-port-578533 - found existing host DHCP lease matching {name: "default-k8s-diff-port-578533", mac: "52:54:00:85:b3:02", ip: "192.168.72.104"}
	I0719 19:34:12.835078   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Reserved static IP address: 192.168.72.104
	I0719 19:34:12.835094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Waiting for SSH to be available...
	I0719 19:34:12.835109   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Getting to WaitForSSH function...
	I0719 19:34:12.837426   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837740   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.837766   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.837864   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH client type: external
	I0719 19:34:12.837885   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa (-rw-------)
	I0719 19:34:12.837916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:12.837930   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | About to run SSH command:
	I0719 19:34:12.837946   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | exit 0
	I0719 19:34:12.959728   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:12.960118   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetConfigRaw
	I0719 19:34:12.960737   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:12.963250   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963645   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.963675   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.963868   68536 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/config.json ...
	I0719 19:34:12.964074   68536 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:12.964092   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:12.964326   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:12.966667   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.966982   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:12.967017   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:12.967148   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:12.967318   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967567   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:12.967727   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:12.967907   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:12.968155   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:12.968169   68536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:13.063641   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:13.063685   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064015   68536 buildroot.go:166] provisioning hostname "default-k8s-diff-port-578533"
	I0719 19:34:13.064045   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.064242   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.067158   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067658   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.067783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.067785   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.067994   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.068309   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.068438   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.068638   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.068653   68536 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-578533 && echo "default-k8s-diff-port-578533" | sudo tee /etc/hostname
	I0719 19:34:13.182279   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-578533
	
	I0719 19:34:13.182304   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.185081   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.185440   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.185585   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.185761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.185925   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.186102   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.186237   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.186400   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.186415   68536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-578533' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-578533/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-578533' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:13.292203   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:13.292246   68536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:13.292276   68536 buildroot.go:174] setting up certificates
	I0719 19:34:13.292285   68536 provision.go:84] configureAuth start
	I0719 19:34:13.292315   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetMachineName
	I0719 19:34:13.292630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.295150   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295466   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.295494   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.295628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.297608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.297896   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.297924   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.298039   68536 provision.go:143] copyHostCerts
	I0719 19:34:13.298129   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:13.298148   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:13.298233   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:13.298352   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:13.298364   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:13.298405   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:13.298491   68536 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:13.298503   68536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:13.298530   68536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:13.298587   68536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-578533 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-578533 localhost minikube]
	I0719 19:34:13.367868   68536 provision.go:177] copyRemoteCerts
	I0719 19:34:13.367923   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:13.367947   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.370508   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370761   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.370783   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.370975   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.371154   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.371331   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.371456   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.984609   68617 start.go:364] duration metric: took 4m7.413784663s to acquireMachinesLock for "embed-certs-889918"
	I0719 19:34:13.984726   68617 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:13.984746   68617 fix.go:54] fixHost starting: 
	I0719 19:34:13.985263   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:13.985332   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:14.001938   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0719 19:34:14.002372   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:14.002864   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:14.002890   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:14.003223   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:14.003400   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:14.003545   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:14.005102   68617 fix.go:112] recreateIfNeeded on embed-certs-889918: state=Stopped err=<nil>
	I0719 19:34:14.005141   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	W0719 19:34:14.005320   68617 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:14.007389   68617 out.go:177] * Restarting existing kvm2 VM for "embed-certs-889918" ...
	I0719 19:34:13.449436   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:13.472085   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 19:34:13.493354   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:13.515145   68536 provision.go:87] duration metric: took 222.847636ms to configureAuth
	I0719 19:34:13.515179   68536 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:13.515350   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:13.515430   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.517974   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518417   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.518449   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.518608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.518818   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519077   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.519266   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.519452   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.519605   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.519619   68536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:13.764107   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:13.764143   68536 machine.go:97] duration metric: took 800.055026ms to provisionDockerMachine
	I0719 19:34:13.764155   68536 start.go:293] postStartSetup for "default-k8s-diff-port-578533" (driver="kvm2")
	I0719 19:34:13.764165   68536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:13.764182   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.764505   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:13.764535   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.767239   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767669   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.767692   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.767870   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.768050   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.768210   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.768356   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.846203   68536 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:13.850376   68536 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:13.850403   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:13.850473   68536 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:13.850562   68536 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:13.850672   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:13.859405   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:13.880565   68536 start.go:296] duration metric: took 116.397021ms for postStartSetup
	I0719 19:34:13.880605   68536 fix.go:56] duration metric: took 18.491777449s for fixHost
	I0719 19:34:13.880628   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.883335   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883686   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.883716   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.883906   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.884071   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884245   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.884390   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.884526   68536 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:13.884730   68536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0719 19:34:13.884743   68536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:13.984426   68536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417653.961477719
	
	I0719 19:34:13.984454   68536 fix.go:216] guest clock: 1721417653.961477719
	I0719 19:34:13.984464   68536 fix.go:229] Guest: 2024-07-19 19:34:13.961477719 +0000 UTC Remote: 2024-07-19 19:34:13.88060979 +0000 UTC m=+250.513369231 (delta=80.867929ms)
	I0719 19:34:13.984498   68536 fix.go:200] guest clock delta is within tolerance: 80.867929ms
	I0719 19:34:13.984503   68536 start.go:83] releasing machines lock for "default-k8s-diff-port-578533", held for 18.59570944s
	I0719 19:34:13.984529   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.984822   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:13.987630   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988146   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.988165   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.988322   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988760   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.988961   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:34:13.989046   68536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:13.989094   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.989197   68536 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:13.989218   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:34:13.991820   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992135   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992285   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992329   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992419   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992558   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:13.992584   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:13.992608   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992695   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:34:13.992798   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.992815   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:34:13.992957   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:13.992978   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:34:13.993119   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:34:14.102452   68536 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:14.108454   68536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:14.256762   68536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:14.263615   68536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:14.263685   68536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:14.280375   68536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:14.280396   68536 start.go:495] detecting cgroup driver to use...
	I0719 19:34:14.280465   68536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:14.297524   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:14.312273   68536 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:14.312334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:14.326334   68536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:14.341338   68536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:14.465631   68536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:14.603249   68536 docker.go:233] disabling docker service ...
	I0719 19:34:14.603333   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:14.617420   68536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:14.630612   68536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:14.776325   68536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:14.890106   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:14.904436   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:14.921845   68536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:14.921921   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.932155   68536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:14.932220   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.942999   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.955449   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.966646   68536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:14.977147   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:14.988224   68536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.005941   68536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:15.018341   68536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:15.028418   68536 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:15.028502   68536 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:15.040704   68536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:15.050734   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:15.159960   68536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:15.323020   68536 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:15.323095   68536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:15.327549   68536 start.go:563] Will wait 60s for crictl version
	I0719 19:34:15.327616   68536 ssh_runner.go:195] Run: which crictl
	I0719 19:34:15.330907   68536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:15.369980   68536 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:15.370062   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.399065   68536 ssh_runner.go:195] Run: crio --version
	I0719 19:34:15.426462   68536 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:14.008924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Start
	I0719 19:34:14.009109   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring networks are active...
	I0719 19:34:14.009827   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network default is active
	I0719 19:34:14.010200   68617 main.go:141] libmachine: (embed-certs-889918) Ensuring network mk-embed-certs-889918 is active
	I0719 19:34:14.010575   68617 main.go:141] libmachine: (embed-certs-889918) Getting domain xml...
	I0719 19:34:14.011168   68617 main.go:141] libmachine: (embed-certs-889918) Creating domain...
	I0719 19:34:15.280485   68617 main.go:141] libmachine: (embed-certs-889918) Waiting to get IP...
	I0719 19:34:15.281463   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.281945   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.282019   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.281911   69727 retry.go:31] will retry after 309.614436ms: waiting for machine to come up
	I0719 19:34:15.593592   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.594042   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.594082   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.593990   69727 retry.go:31] will retry after 302.723172ms: waiting for machine to come up
	I0719 19:34:15.898674   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:15.899173   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:15.899203   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:15.899125   69727 retry.go:31] will retry after 419.891108ms: waiting for machine to come up
	I0719 19:34:16.320907   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.321399   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.321444   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.321365   69727 retry.go:31] will retry after 528.113112ms: waiting for machine to come up
	I0719 19:34:15.427710   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetIP
	I0719 19:34:15.430887   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431461   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:34:15.431493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:34:15.431702   68536 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:15.435670   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:15.447909   68536 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:15.448033   68536 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:15.448080   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:15.483687   68536 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:15.483774   68536 ssh_runner.go:195] Run: which lz4
	I0719 19:34:15.487578   68536 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 19:34:15.491770   68536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:15.491799   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:16.808717   68536 crio.go:462] duration metric: took 1.321185682s to copy over tarball
	I0719 19:34:16.808781   68536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:16.850971   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:16.851465   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:16.851497   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:16.851413   69727 retry.go:31] will retry after 734.178646ms: waiting for machine to come up
	I0719 19:34:17.587364   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:17.587763   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:17.587781   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:17.587722   69727 retry.go:31] will retry after 818.269359ms: waiting for machine to come up
	I0719 19:34:18.407178   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:18.407788   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:18.407815   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:18.407741   69727 retry.go:31] will retry after 1.018825067s: waiting for machine to come up
	I0719 19:34:19.427765   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:19.428247   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:19.428277   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:19.428196   69727 retry.go:31] will retry after 1.005679734s: waiting for machine to come up
	I0719 19:34:20.435288   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:20.435661   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:20.435693   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:20.435611   69727 retry.go:31] will retry after 1.648093342s: waiting for machine to come up
	I0719 19:34:19.006661   68536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197857661s)
	I0719 19:34:19.006696   68536 crio.go:469] duration metric: took 2.197951458s to extract the tarball
	I0719 19:34:19.006707   68536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:34:19.043352   68536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:19.083293   68536 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:19.083316   68536 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:19.083333   68536 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.30.3 crio true true} ...
	I0719 19:34:19.083489   68536 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-578533 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:19.083593   68536 ssh_runner.go:195] Run: crio config
	I0719 19:34:19.129638   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:34:19.129667   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:19.129697   68536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:19.129730   68536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-578533 NodeName:default-k8s-diff-port-578533 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:19.129908   68536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-578533"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:19.129996   68536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:19.139345   68536 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:19.139423   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:19.147942   68536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 19:34:19.163170   68536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:19.179000   68536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 19:34:19.195196   68536 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:19.198941   68536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:19.210202   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:19.326530   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:19.343318   68536 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533 for IP: 192.168.72.104
	I0719 19:34:19.343352   68536 certs.go:194] generating shared ca certs ...
	I0719 19:34:19.343371   68536 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:19.343537   68536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:19.343600   68536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:19.343614   68536 certs.go:256] generating profile certs ...
	I0719 19:34:19.343739   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.key
	I0719 19:34:19.343808   68536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key.0a2b4bda
	I0719 19:34:19.343876   68536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key
	I0719 19:34:19.344051   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:19.344112   68536 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:19.344126   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:19.344155   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:19.344183   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:19.344211   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:19.344270   68536 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:19.345114   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:19.382756   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:19.414239   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:19.448517   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:19.491082   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 19:34:19.514344   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:19.546679   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:19.569407   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:34:19.591638   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:19.613336   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:19.635118   68536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:19.656675   68536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:19.672292   68536 ssh_runner.go:195] Run: openssl version
	I0719 19:34:19.678232   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:19.688472   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692656   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.692714   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:19.698240   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:19.707912   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:19.718120   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722206   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.722257   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:19.727641   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:19.737694   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:19.747354   68536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751343   68536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.751389   68536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:19.756539   68536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:34:19.766256   68536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:19.770309   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:19.775957   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:19.781384   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:19.786780   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:19.792104   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:19.797536   68536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:34:19.802918   68536 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-578533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-578533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:19.803026   68536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:19.803100   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.837950   68536 cri.go:89] found id: ""
	I0719 19:34:19.838029   68536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:19.847639   68536 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:19.847667   68536 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:19.847718   68536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:19.856631   68536 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:19.857641   68536 kubeconfig.go:125] found "default-k8s-diff-port-578533" server: "https://192.168.72.104:8444"
	I0719 19:34:19.859821   68536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:19.868724   68536 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0719 19:34:19.868764   68536 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:19.868778   68536 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:19.868844   68536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:19.907378   68536 cri.go:89] found id: ""
	I0719 19:34:19.907445   68536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:19.923744   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:19.933425   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:19.933444   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:19.933508   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:34:19.942406   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:19.942458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:19.951275   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:34:19.960233   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:19.960296   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:19.969289   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.977401   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:19.977458   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:19.986298   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:34:19.994572   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:19.994640   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:20.003270   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:20.012009   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:20.120255   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.398914   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.278623256s)
	I0719 19:34:21.398951   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.594827   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.650132   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:21.717440   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:21.717566   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.218530   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.718618   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:23.218370   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:22.084801   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:22.085201   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:22.085230   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:22.085166   69727 retry.go:31] will retry after 1.686559667s: waiting for machine to come up
	I0719 19:34:23.773486   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:23.774013   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:23.774039   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:23.773982   69727 retry.go:31] will retry after 2.360164076s: waiting for machine to come up
	I0719 19:34:26.136972   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:26.137506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:26.137528   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:26.137451   69727 retry.go:31] will retry after 2.922244999s: waiting for machine to come up
	I0719 19:34:23.717755   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.217721   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:24.232459   68536 api_server.go:72] duration metric: took 2.515020251s to wait for apiserver process to appear ...
	I0719 19:34:24.232486   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:24.232513   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:29.061452   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:29.062023   68617 main.go:141] libmachine: (embed-certs-889918) DBG | unable to find current IP address of domain embed-certs-889918 in network mk-embed-certs-889918
	I0719 19:34:29.062056   68617 main.go:141] libmachine: (embed-certs-889918) DBG | I0719 19:34:29.061975   69727 retry.go:31] will retry after 3.928406856s: waiting for machine to come up
	I0719 19:34:29.232943   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:29.233002   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:34.468635   68995 start.go:364] duration metric: took 3m32.362055744s to acquireMachinesLock for "old-k8s-version-570369"
	I0719 19:34:34.468688   68995 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:34.468696   68995 fix.go:54] fixHost starting: 
	I0719 19:34:34.469206   68995 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:34.469246   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:34.488843   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0719 19:34:34.489238   68995 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:34.489682   68995 main.go:141] libmachine: Using API Version  1
	I0719 19:34:34.489708   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:34.490007   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:34.490191   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:34.490327   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetState
	I0719 19:34:34.491830   68995 fix.go:112] recreateIfNeeded on old-k8s-version-570369: state=Stopped err=<nil>
	I0719 19:34:34.491875   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	W0719 19:34:34.492029   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:34.493991   68995 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-570369" ...
	I0719 19:34:32.995044   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995539   68617 main.go:141] libmachine: (embed-certs-889918) Found IP for machine: 192.168.61.107
	I0719 19:34:32.995571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has current primary IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.995583   68617 main.go:141] libmachine: (embed-certs-889918) Reserving static IP address...
	I0719 19:34:32.996234   68617 main.go:141] libmachine: (embed-certs-889918) Reserved static IP address: 192.168.61.107
	I0719 19:34:32.996289   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.996409   68617 main.go:141] libmachine: (embed-certs-889918) Waiting for SSH to be available...
	I0719 19:34:32.996438   68617 main.go:141] libmachine: (embed-certs-889918) DBG | skip adding static IP to network mk-embed-certs-889918 - found existing host DHCP lease matching {name: "embed-certs-889918", mac: "52:54:00:5d:44:ce", ip: "192.168.61.107"}
	I0719 19:34:32.996476   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Getting to WaitForSSH function...
	I0719 19:34:32.998802   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999144   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:32.999166   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:32.999286   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH client type: external
	I0719 19:34:32.999310   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa (-rw-------)
	I0719 19:34:32.999352   68617 main.go:141] libmachine: (embed-certs-889918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:32.999371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | About to run SSH command:
	I0719 19:34:32.999404   68617 main.go:141] libmachine: (embed-certs-889918) DBG | exit 0
	I0719 19:34:33.128231   68617 main.go:141] libmachine: (embed-certs-889918) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:33.128662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetConfigRaw
	I0719 19:34:33.129434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.132282   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132578   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.132613   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.132825   68617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/config.json ...
	I0719 19:34:33.133051   68617 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:33.133071   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:33.133272   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.135294   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135617   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.135645   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.135727   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.135898   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136064   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.136208   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.136358   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.136534   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.136545   68617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:33.247702   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:33.247736   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248042   68617 buildroot.go:166] provisioning hostname "embed-certs-889918"
	I0719 19:34:33.248072   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.248288   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.251010   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.251536   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.251739   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.251936   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.252329   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.252586   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.252822   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.252848   68617 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-889918 && echo "embed-certs-889918" | sudo tee /etc/hostname
	I0719 19:34:33.379987   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-889918
	
	I0719 19:34:33.380013   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.382728   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383037   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.383059   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.383212   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.383482   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383662   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.383805   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.384004   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.384193   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.384211   68617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-889918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-889918/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-889918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:33.503798   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:33.503827   68617 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:33.503870   68617 buildroot.go:174] setting up certificates
	I0719 19:34:33.503884   68617 provision.go:84] configureAuth start
	I0719 19:34:33.503896   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetMachineName
	I0719 19:34:33.504125   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:33.506371   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506777   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.506809   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.506914   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.509368   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509751   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.509783   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.509892   68617 provision.go:143] copyHostCerts
	I0719 19:34:33.509957   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:33.509967   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:33.510035   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:33.510150   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:33.510163   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:33.510194   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:33.510269   68617 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:33.510280   68617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:33.510306   68617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:33.510369   68617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.embed-certs-889918 san=[127.0.0.1 192.168.61.107 embed-certs-889918 localhost minikube]
	I0719 19:34:33.799515   68617 provision.go:177] copyRemoteCerts
	I0719 19:34:33.799571   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:33.799595   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.802495   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.802789   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.802817   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.803020   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.803245   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.803453   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.803618   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:33.889555   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:33.912176   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 19:34:33.934472   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:34:33.956877   68617 provision.go:87] duration metric: took 452.977402ms to configureAuth
	I0719 19:34:33.956906   68617 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:33.957113   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:33.957190   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:33.960120   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960551   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:33.960583   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:33.960761   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:33.960948   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961094   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:33.961221   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:33.961368   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:33.961587   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:33.961608   68617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:34.222464   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:34.222508   68617 machine.go:97] duration metric: took 1.089428018s to provisionDockerMachine
	I0719 19:34:34.222519   68617 start.go:293] postStartSetup for "embed-certs-889918" (driver="kvm2")
	I0719 19:34:34.222530   68617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:34.222544   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.222899   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:34.222924   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.225636   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226063   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.226093   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.226259   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.226491   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.226686   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.226918   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.314782   68617 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:34.318940   68617 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:34.318966   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:34.319044   68617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:34.319122   68617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:34.319223   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:34.328873   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:34.351760   68617 start.go:296] duration metric: took 129.22587ms for postStartSetup
	I0719 19:34:34.351804   68617 fix.go:56] duration metric: took 20.367059026s for fixHost
	I0719 19:34:34.351821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.354759   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355195   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.355228   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.355405   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.355619   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355821   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.355994   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.356154   68617 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:34.356371   68617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.107 22 <nil> <nil>}
	I0719 19:34:34.356386   68617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 19:34:34.468486   68617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417674.441309314
	
	I0719 19:34:34.468508   68617 fix.go:216] guest clock: 1721417674.441309314
	I0719 19:34:34.468514   68617 fix.go:229] Guest: 2024-07-19 19:34:34.441309314 +0000 UTC Remote: 2024-07-19 19:34:34.35180736 +0000 UTC m=+267.913178040 (delta=89.501954ms)
	I0719 19:34:34.468531   68617 fix.go:200] guest clock delta is within tolerance: 89.501954ms
	I0719 19:34:34.468535   68617 start.go:83] releasing machines lock for "embed-certs-889918", held for 20.483881359s
	I0719 19:34:34.468559   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.468844   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:34.471920   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472311   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.472346   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.472517   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473026   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473229   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:34.473305   68617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:34.473364   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.473484   68617 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:34.473513   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:34.476087   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476262   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476545   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476571   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476614   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:34.476834   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.476835   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:34.476988   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:34.477070   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:34.477293   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477383   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:34.477453   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.477536   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:34.560612   68617 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:34.593072   68617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:34.734988   68617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:34.741527   68617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:34.741637   68617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:34.758845   68617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
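
The two steps above first probe for a loopback CNI config and then rename any bridge/podman conflists in /etc/cni/net.d so the runtime stops loading them. A minimal local sketch of that rename step, assuming the same directory and the same ".mk_disabled" suffix (the helper name is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir so the
// container runtime no longer loads them, mirroring the `find ... -exec mv`
// shown above. Files already ending in .mk_disabled are left alone.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "disable CNI configs:", err)
		os.Exit(1)
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}
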
	I0719 19:34:34.758869   68617 start.go:495] detecting cgroup driver to use...
	I0719 19:34:34.758924   68617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:34.774350   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:34.787003   68617 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:34.787074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:34.801080   68617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:34.815679   68617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:34.940970   68617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:35.101668   68617 docker.go:233] disabling docker service ...
	I0719 19:34:35.101744   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:35.116074   68617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:35.132085   68617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:35.254387   68617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:35.380270   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:35.394387   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:35.411794   68617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 19:34:35.411877   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.421927   68617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:35.421978   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.432790   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.445416   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.457170   68617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:35.467762   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.478775   68617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:35.498894   68617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
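
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup, and injects a default_sysctls entry that reopens unprivileged port 0. A rough Go equivalent of the key-replacement pattern, assuming the same drop-in path (a sketch, not the code behind crio.go):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces any existing `key = ...` line in a cri-o drop-in,
// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
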
	I0719 19:34:35.509643   68617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:35.520089   68617 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:35.520138   68617 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:35.532934   68617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
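
The netfilter probe above fails with status 255 because br_netfilter is not yet loaded, which the log explicitly treats as tolerable; the module is then loaded and IPv4 forwarding is enabled before cri-o is restarted. A small sketch of that check-then-modprobe fallback, calling the same tools via exec (paths and error handling are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence above: try to read the
// bridge-nf-call-iptables sysctl, load br_netfilter if the key is missing,
// then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key missing: the module is not loaded yet, so load it now.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
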
	I0719 19:34:35.543249   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:35.702678   68617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:35.843858   68617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:35.843944   68617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:35.848599   68617 start.go:563] Will wait 60s for crictl version
	I0719 19:34:35.848677   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:34:35.852436   68617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:35.889698   68617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:35.889782   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.917297   68617 ssh_runner.go:195] Run: crio --version
	I0719 19:34:35.948248   68617 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 19:34:35.949617   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetIP
	I0719 19:34:35.952938   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953304   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:35.953340   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:35.953554   68617 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:35.957815   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
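
The grep/cp pipeline above drops any stale host.minikube.internal line from /etc/hosts and re-appends it pointing at the gateway (192.168.61.1); the same pattern recurs later for control-plane.minikube.internal. The same filter-then-append step, sketched directly in Go (the temp-file handling of the shell pipeline is simplified away):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any existing line ending in "\t<name>" from the
// hosts file and appends "ip\tname", like the bash pipeline in the log.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
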
	I0719 19:34:35.971246   68617 kubeadm.go:883] updating cluster {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:35.971572   68617 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 19:34:35.971628   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:36.012019   68617 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 19:34:36.012093   68617 ssh_runner.go:195] Run: which lz4
	I0719 19:34:36.017026   68617 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:36.021166   68617 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:36.021200   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 19:34:34.495371   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .Start
	I0719 19:34:34.495553   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring networks are active...
	I0719 19:34:34.496391   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network default is active
	I0719 19:34:34.496864   68995 main.go:141] libmachine: (old-k8s-version-570369) Ensuring network mk-old-k8s-version-570369 is active
	I0719 19:34:34.497347   68995 main.go:141] libmachine: (old-k8s-version-570369) Getting domain xml...
	I0719 19:34:34.498147   68995 main.go:141] libmachine: (old-k8s-version-570369) Creating domain...
	I0719 19:34:35.772867   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting to get IP...
	I0719 19:34:35.773913   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:35.774431   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:35.774493   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:35.774403   69904 retry.go:31] will retry after 240.258158ms: waiting for machine to come up
	I0719 19:34:36.016002   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.016573   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.016602   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.016519   69904 retry.go:31] will retry after 386.689334ms: waiting for machine to come up
	I0719 19:34:36.405411   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.405914   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.405957   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.405885   69904 retry.go:31] will retry after 295.472664ms: waiting for machine to come up
	I0719 19:34:36.703587   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:36.704082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:36.704113   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:36.704040   69904 retry.go:31] will retry after 436.936059ms: waiting for machine to come up
	I0719 19:34:34.233826   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:34.233873   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:37.351373   68617 crio.go:462] duration metric: took 1.334375789s to copy over tarball
	I0719 19:34:37.351439   68617 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:34:39.633138   68617 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281647989s)
	I0719 19:34:39.633178   68617 crio.go:469] duration metric: took 2.281778927s to extract the tarball
	I0719 19:34:39.633187   68617 ssh_runner.go:146] rm: /preloaded.tar.lz4
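
Since no preloaded images were found in the cri-o store, the ~406 MB preload tarball is copied over, unpacked under /var with capability xattrs preserved, and then removed. A condensed sketch of the extract-and-clean-up step using the same tar invocation (run over SSH in the real flow; here executed locally for brevity):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks the preload tarball into /var with capability
// xattrs preserved, then deletes the tarball, matching the tar/rm pair above.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
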
	I0719 19:34:39.671219   68617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:39.716854   68617 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 19:34:39.716878   68617 cache_images.go:84] Images are preloaded, skipping loading
	I0719 19:34:39.716885   68617 kubeadm.go:934] updating node { 192.168.61.107 8443 v1.30.3 crio true true} ...
	I0719 19:34:39.717014   68617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-889918 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:34:39.717090   68617 ssh_runner.go:195] Run: crio config
	I0719 19:34:39.764571   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:39.764597   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:39.764610   68617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:34:39.764633   68617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.107 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-889918 NodeName:embed-certs-889918 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:34:39.764796   68617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-889918"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:34:39.764873   68617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 19:34:39.774416   68617 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:34:39.774479   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:34:39.783899   68617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0719 19:34:39.799538   68617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:34:39.815241   68617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0719 19:34:39.831507   68617 ssh_runner.go:195] Run: grep 192.168.61.107	control-plane.minikube.internal$ /etc/hosts
	I0719 19:34:39.835145   68617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:39.846125   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:39.958886   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:39.975440   68617 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918 for IP: 192.168.61.107
	I0719 19:34:39.975469   68617 certs.go:194] generating shared ca certs ...
	I0719 19:34:39.975488   68617 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:39.975688   68617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:34:39.975742   68617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:34:39.975755   68617 certs.go:256] generating profile certs ...
	I0719 19:34:39.975891   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/client.key
	I0719 19:34:39.975975   68617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key.edbb2bee
	I0719 19:34:39.976027   68617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key
	I0719 19:34:39.976166   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:34:39.976206   68617 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:34:39.976214   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:34:39.976242   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:34:39.976267   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:34:39.976317   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:34:39.976368   68617 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:39.976983   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:34:40.011234   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:34:40.057204   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:34:40.079930   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:34:40.102045   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 19:34:40.124479   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:34:40.153087   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:34:40.183427   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/embed-certs-889918/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:34:40.206893   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:34:40.230582   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:34:40.254384   68617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:34:40.278454   68617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:34:40.297538   68617 ssh_runner.go:195] Run: openssl version
	I0719 19:34:40.303579   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:34:40.315682   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320283   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.320356   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:34:40.326353   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:34:40.338564   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:34:40.349901   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354229   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.354291   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:34:40.359773   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:34:40.370212   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:34:40.383548   68617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388929   68617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.388981   68617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:34:40.396108   68617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
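
Each CA/cert copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so TLS verification can find it. A sketch of that hash-and-symlink step, shelling out to the same `openssl x509 -hash -noout` call seen above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCertLink asks openssl for the subject hash of pemPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, like the `ln -fs` commands above.
func installCertLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // behave like ln -f: replace an existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := installCertLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
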
	I0719 19:34:40.409065   68617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:34:40.413154   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:34:40.418763   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:34:40.426033   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:34:40.431826   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:34:40.437469   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:34:40.443195   68617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
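
The batch of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least another 24 hours, so the cluster can be restarted without regenerating certs. The same test can be done natively with crypto/x509; a sketch assuming PEM-encoded input:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// validForAnotherDay reports whether the first certificate in the PEM file
// is still valid 24h from now, the same check as `openssl x509 -checkend 86400`.
func validForAnotherDay(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
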
	I0719 19:34:40.448932   68617 kubeadm.go:392] StartCluster: {Name:embed-certs-889918 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-889918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:34:40.449086   68617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:34:40.449153   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.483766   68617 cri.go:89] found id: ""
	I0719 19:34:40.483867   68617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:34:40.493593   68617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:34:40.493618   68617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:34:40.493668   68617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:34:40.502750   68617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:34:40.503917   68617 kubeconfig.go:125] found "embed-certs-889918" server: "https://192.168.61.107:8443"
	I0719 19:34:40.505701   68617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:34:40.516036   68617 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.107
	I0719 19:34:40.516067   68617 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:34:40.516077   68617 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:34:40.516144   68617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:34:40.562686   68617 cri.go:89] found id: ""
	I0719 19:34:40.562771   68617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:34:40.578349   68617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:34:40.587316   68617 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:34:40.587349   68617 kubeadm.go:157] found existing configuration files:
	
	I0719 19:34:40.587421   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:34:40.597112   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:34:40.597165   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:34:40.606941   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:34:40.616452   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:34:40.616495   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:34:40.626308   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.635642   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:34:40.635683   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:34:40.645499   68617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:34:40.655029   68617 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:34:40.655080   68617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:34:40.665113   68617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:34:40.673843   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:40.785511   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:37.142840   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.143334   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.143364   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.143287   69904 retry.go:31] will retry after 500.03346ms: waiting for machine to come up
	I0719 19:34:37.645270   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:37.645920   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:37.645949   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:37.645856   69904 retry.go:31] will retry after 803.488078ms: waiting for machine to come up
	I0719 19:34:38.450546   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:38.451043   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:38.451062   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:38.451000   69904 retry.go:31] will retry after 791.584627ms: waiting for machine to come up
	I0719 19:34:39.244154   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:39.244564   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:39.244597   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:39.244541   69904 retry.go:31] will retry after 1.090004215s: waiting for machine to come up
	I0719 19:34:40.335726   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:40.336278   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:40.336304   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:40.336240   69904 retry.go:31] will retry after 1.629231127s: waiting for machine to come up
	I0719 19:34:41.966947   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:41.967475   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:41.967497   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:41.967436   69904 retry.go:31] will retry after 2.06824211s: waiting for machine to come up
	I0719 19:34:39.234200   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:39.234245   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:41.592081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.803419   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:41.898042   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:42.005198   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:34:42.005290   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:42.505705   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.005532   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.505668   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:34:43.519708   68617 api_server.go:72] duration metric: took 1.514509053s to wait for apiserver process to appear ...
	I0719 19:34:43.519740   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:34:43.519763   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.288947   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.288983   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:46.289000   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.306114   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:34:46.306137   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:34:44.038853   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:44.039330   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:44.039352   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:44.039303   69904 retry.go:31] will retry after 2.688374782s: waiting for machine to come up
	I0719 19:34:46.730865   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:46.731354   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:46.731377   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:46.731316   69904 retry.go:31] will retry after 3.054833322s: waiting for machine to come up
	I0719 19:34:46.519995   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:46.524219   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:46.524260   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.020849   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.029117   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:34:47.029145   68617 api_server.go:103] status: https://192.168.61.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:34:47.520815   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:34:47.525739   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:34:47.532240   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:34:47.532277   68617 api_server.go:131] duration metric: took 4.012528013s to wait for apiserver health ...
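
The healthz loop above polls https://192.168.61.107:8443/healthz roughly every 500ms, treating 403 (anonymous access still forbidden) and 500 (bootstrap post-start hooks not yet finished) as not-ready until a plain 200 "ok" arrives about four seconds later. A stripped-down version of that poll; certificate verification is skipped here only to keep the sketch self-contained, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the timeout expires; 403 and 500 responses simply mean the
// control plane is still bootstrapping, as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip TLS verification instead of
		// loading the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.107:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
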
	I0719 19:34:47.532290   68617 cni.go:84] Creating CNI manager for ""
	I0719 19:34:47.532300   68617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:47.533969   68617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:34:44.235220   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:44.235292   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.372339   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": read tcp 192.168.72.1:52608->192.168.72.104:8444: read: connection reset by peer
	I0719 19:34:44.732722   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:44.733383   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0719 19:34:45.232769   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:47.535361   68617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:34:47.545742   68617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:34:47.570944   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:34:47.582007   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:34:47.582046   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:34:47.582065   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:34:47.582075   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:34:47.582083   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:34:47.582091   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:34:47.582097   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [3560778a-031a-486f-9f53-77c243737daf] Running
	I0719 19:34:47.582113   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:34:47.582121   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:34:47.582131   68617 system_pods.go:74] duration metric: took 11.164178ms to wait for pod list to return data ...
	I0719 19:34:47.582153   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:34:47.588678   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:34:47.588713   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:34:47.588732   68617 node_conditions.go:105] duration metric: took 6.572921ms to run NodePressure ...
	I0719 19:34:47.588751   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:34:47.869129   68617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875206   68617 kubeadm.go:739] kubelet initialised
	I0719 19:34:47.875228   68617 kubeadm.go:740] duration metric: took 6.075227ms waiting for restarted kubelet to initialise ...
	I0719 19:34:47.875239   68617 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:47.883454   68617 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.889101   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889128   68617 pod_ready.go:81] duration metric: took 5.645208ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.889149   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.889167   68617 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.894286   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894312   68617 pod_ready.go:81] duration metric: took 5.135667ms for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.894321   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "etcd-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.894327   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.899401   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899428   68617 pod_ready.go:81] duration metric: took 5.092368ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.899435   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.899441   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:47.976133   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976170   68617 pod_ready.go:81] duration metric: took 76.718663ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:47.976183   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:47.976193   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.374196   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374232   68617 pod_ready.go:81] duration metric: took 398.025945ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.374242   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "kube-proxy-5fcgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.374250   68617 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.574541   68617 pod_ready.go:97] error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574575   68617 pod_ready.go:81] duration metric: took 200.319358ms for pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.574585   68617 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-embed-certs-889918" in "kube-system" namespace (skipping!): pods "kube-scheduler-embed-certs-889918" not found
	I0719 19:34:48.574591   68617 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:48.974614   68617 pod_ready.go:97] node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974640   68617 pod_ready.go:81] duration metric: took 400.042539ms for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:34:48.974648   68617 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-889918" hosting pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:48.974655   68617 pod_ready.go:38] duration metric: took 1.099407444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
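Every "skipping!" entry above is driven by the node condition rather than by the pods themselves. A minimal sketch for inspecting that condition by hand, assuming the kubeconfig context carries the profile name (the minikube default):

    # Print the node's Ready condition status ("True", "False" or "Unknown")
    kubectl --context embed-certs-889918 get node embed-certs-889918 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'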
	I0719 19:34:48.974672   68617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:34:48.985044   68617 ops.go:34] apiserver oom_adj: -16
	I0719 19:34:48.985072   68617 kubeadm.go:597] duration metric: took 8.491446525s to restartPrimaryControlPlane
	I0719 19:34:48.985084   68617 kubeadm.go:394] duration metric: took 8.536160393s to StartCluster
	I0719 19:34:48.985117   68617 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.985216   68617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:34:48.986883   68617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:34:48.987110   68617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.107 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:34:48.987179   68617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:34:48.987252   68617 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-889918"
	I0719 19:34:48.987271   68617 addons.go:69] Setting default-storageclass=true in profile "embed-certs-889918"
	I0719 19:34:48.987287   68617 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-889918"
	W0719 19:34:48.987297   68617 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:34:48.987303   68617 addons.go:69] Setting metrics-server=true in profile "embed-certs-889918"
	I0719 19:34:48.987319   68617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-889918"
	I0719 19:34:48.987331   68617 config.go:182] Loaded profile config "embed-certs-889918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:34:48.987352   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987335   68617 addons.go:234] Setting addon metrics-server=true in "embed-certs-889918"
	W0719 19:34:48.987368   68617 addons.go:243] addon metrics-server should already be in state true
	I0719 19:34:48.987402   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:48.987647   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987684   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987692   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987701   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:48.987720   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.987723   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:48.988905   68617 out.go:177] * Verifying Kubernetes components...
	I0719 19:34:48.990196   68617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:49.003003   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0719 19:34:49.003016   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0719 19:34:49.003134   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0719 19:34:49.003399   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003454   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003518   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.003947   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.003967   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004007   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004023   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004100   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.004115   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.004295   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004351   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004444   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.004512   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.004785   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.004819   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.005005   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.005037   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.008189   68617 addons.go:234] Setting addon default-storageclass=true in "embed-certs-889918"
	W0719 19:34:49.008212   68617 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:34:49.008242   68617 host.go:66] Checking if "embed-certs-889918" exists ...
	I0719 19:34:49.008617   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.008649   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.020066   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38847
	I0719 19:34:49.020085   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40037
	I0719 19:34:49.020445   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020502   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.020882   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.020899   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021021   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.021049   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.021278   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021313   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.021473   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.021511   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.023274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.023416   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.025350   68617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:34:49.025361   68617 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:34:49.025824   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0719 19:34:49.026239   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.026584   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:34:49.026603   68617 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:34:49.026621   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.026679   68617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.026696   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:34:49.026712   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.027527   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.027544   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.027893   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.028407   68617 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:49.028439   68617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:49.030553   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.030579   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031224   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031246   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031274   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031279   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.031298   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.031375   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.031477   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031659   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.031717   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.031867   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.032146   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.032524   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.043850   68617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0719 19:34:49.044198   68617 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:49.044657   68617 main.go:141] libmachine: Using API Version  1
	I0719 19:34:49.044679   68617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:49.045019   68617 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:49.045222   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetState
	I0719 19:34:49.046751   68617 main.go:141] libmachine: (embed-certs-889918) Calling .DriverName
	I0719 19:34:49.046979   68617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.046997   68617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:34:49.047014   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHHostname
	I0719 19:34:49.049718   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050184   68617 main.go:141] libmachine: (embed-certs-889918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:44:ce", ip: ""} in network mk-embed-certs-889918: {Iface:virbr1 ExpiryTime:2024-07-19 20:34:24 +0000 UTC Type:0 Mac:52:54:00:5d:44:ce Iaid: IPaddr:192.168.61.107 Prefix:24 Hostname:embed-certs-889918 Clientid:01:52:54:00:5d:44:ce}
	I0719 19:34:49.050210   68617 main.go:141] libmachine: (embed-certs-889918) DBG | domain embed-certs-889918 has defined IP address 192.168.61.107 and MAC address 52:54:00:5d:44:ce in network mk-embed-certs-889918
	I0719 19:34:49.050371   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHPort
	I0719 19:34:49.050520   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHKeyPath
	I0719 19:34:49.050670   68617 main.go:141] libmachine: (embed-certs-889918) Calling .GetSSHUsername
	I0719 19:34:49.050852   68617 sshutil.go:53] new ssh client: &{IP:192.168.61.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/embed-certs-889918/id_rsa Username:docker}
	I0719 19:34:49.167388   68617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:34:49.183694   68617 node_ready.go:35] waiting up to 6m0s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:49.290895   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:34:49.298027   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:34:49.298047   68617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:34:49.313881   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:34:49.319432   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:34:49.319459   68617 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:34:49.357567   68617 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:49.357591   68617 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:34:49.385245   68617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:34:50.267101   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267126   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267145   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267136   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267209   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267226   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267407   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267419   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267427   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267434   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267506   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267527   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267546   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267553   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267561   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267567   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267576   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267601   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267619   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.267630   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267652   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267660   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267890   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.267920   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.267933   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.267945   68617 addons.go:475] Verifying addon metrics-server=true in "embed-certs-889918"
	I0719 19:34:50.268025   68617 main.go:141] libmachine: (embed-certs-889918) DBG | Closing plugin on server side
	I0719 19:34:50.268046   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.268053   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.273372   68617 main.go:141] libmachine: Making call to close driver server
	I0719 19:34:50.273398   68617 main.go:141] libmachine: (embed-certs-889918) Calling .Close
	I0719 19:34:50.273607   68617 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:34:50.273621   68617 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:34:50.275547   68617 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0719 19:34:50.276813   68617 addons.go:510] duration metric: took 1.289644945s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
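The metrics-server manifests enabled above are applied in one invocation of the kubectl binary staged under /var/lib/minikube/binaries; the single-line Run command at 19:34:49.385245 earlier in this segment, reflowed for readability (same paths, executed inside the VM over SSH), reads:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml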
	I0719 19:34:51.187125   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:49.787569   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:49.788100   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | unable to find current IP address of domain old-k8s-version-570369 in network mk-old-k8s-version-570369
	I0719 19:34:49.788136   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | I0719 19:34:49.788058   69904 retry.go:31] will retry after 4.206772692s: waiting for machine to come up
	I0719 19:34:50.233851   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:50.233900   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
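The two probes above hit the apiserver health endpoint directly. A minimal manual equivalent (a sketch only; -k is used because the apiserver serves a certificate issued by minikube's own CA, not a system-trusted one):

    # Probe the apiserver health endpoint; expect "ok" once the control plane is up
    curl -k --max-time 10 https://192.168.72.104:8444/healthz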
	I0719 19:34:55.468495   68019 start.go:364] duration metric: took 55.077199048s to acquireMachinesLock for "no-preload-971041"
	I0719 19:34:55.468549   68019 start.go:96] Skipping create...Using existing machine configuration
	I0719 19:34:55.468561   68019 fix.go:54] fixHost starting: 
	I0719 19:34:55.469011   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:34:55.469045   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:34:55.488854   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0719 19:34:55.489237   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:34:55.489746   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:34:55.489769   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:34:55.490164   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:34:55.490353   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:34:55.490535   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:34:55.492080   68019 fix.go:112] recreateIfNeeded on no-preload-971041: state=Stopped err=<nil>
	I0719 19:34:55.492100   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	W0719 19:34:55.492240   68019 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 19:34:55.494051   68019 out.go:177] * Restarting existing kvm2 VM for "no-preload-971041" ...
	I0719 19:34:53.189961   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:55.688298   68617 node_ready.go:53] node "embed-certs-889918" has status "Ready":"False"
	I0719 19:34:53.999402   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000105   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has current primary IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.000129   68995 main.go:141] libmachine: (old-k8s-version-570369) Found IP for machine: 192.168.39.220
	I0719 19:34:54.000144   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserving static IP address...
	I0719 19:34:54.000535   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.000738   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | skip adding static IP to network mk-old-k8s-version-570369 - found existing host DHCP lease matching {name: "old-k8s-version-570369", mac: "52:54:00:80:e3:2b", ip: "192.168.39.220"}
	I0719 19:34:54.000786   68995 main.go:141] libmachine: (old-k8s-version-570369) Reserved static IP address: 192.168.39.220
	I0719 19:34:54.000799   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Getting to WaitForSSH function...
	I0719 19:34:54.000819   68995 main.go:141] libmachine: (old-k8s-version-570369) Waiting for SSH to be available...
	I0719 19:34:54.003082   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003489   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.003519   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.003700   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH client type: external
	I0719 19:34:54.003733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa (-rw-------)
	I0719 19:34:54.003769   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:34:54.003786   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | About to run SSH command:
	I0719 19:34:54.003802   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | exit 0
	I0719 19:34:54.131809   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | SSH cmd err, output: <nil>: 
	I0719 19:34:54.132235   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetConfigRaw
	I0719 19:34:54.132946   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.135584   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.135977   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.136011   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.136256   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/config.json ...
	I0719 19:34:54.136443   68995 machine.go:94] provisionDockerMachine start ...
	I0719 19:34:54.136465   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:54.136815   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.139347   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139705   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.139733   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.139951   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.140132   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140272   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.140567   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.140754   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.141006   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.141018   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:34:54.251997   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:34:54.252035   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252283   68995 buildroot.go:166] provisioning hostname "old-k8s-version-570369"
	I0719 19:34:54.252310   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.252568   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.255414   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.255812   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.255860   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.256041   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.256232   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.256586   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.256793   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.257004   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.257019   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-570369 && echo "old-k8s-version-570369" | sudo tee /etc/hostname
	I0719 19:34:54.381363   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-570369
	
	I0719 19:34:54.381405   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.384406   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384697   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.384724   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.384866   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.385050   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385226   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.385399   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.385554   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.385735   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.385752   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-570369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-570369/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-570369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:34:54.504713   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:34:54.504740   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:34:54.504778   68995 buildroot.go:174] setting up certificates
	I0719 19:34:54.504786   68995 provision.go:84] configureAuth start
	I0719 19:34:54.504796   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetMachineName
	I0719 19:34:54.505040   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:54.507978   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508375   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.508407   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.508594   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.510609   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.510923   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.510952   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.511003   68995 provision.go:143] copyHostCerts
	I0719 19:34:54.511068   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:34:54.511079   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:34:54.511147   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:34:54.511250   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:34:54.511260   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:34:54.511290   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:34:54.511365   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:34:54.511374   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:34:54.511404   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:34:54.511470   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-570369 san=[127.0.0.1 192.168.39.220 localhost minikube old-k8s-version-570369]
	I0719 19:34:54.759014   68995 provision.go:177] copyRemoteCerts
	I0719 19:34:54.759064   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:34:54.759089   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.761770   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762089   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.762133   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.762262   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.762483   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.762633   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.762744   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:54.849819   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 19:34:54.874113   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 19:34:54.897979   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:34:54.921635   68995 provision.go:87] duration metric: took 416.834667ms to configureAuth
	I0719 19:34:54.921669   68995 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:34:54.921904   68995 config.go:182] Loaded profile config "old-k8s-version-570369": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 19:34:54.922004   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:54.924798   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925091   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:54.925121   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:54.925308   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:54.925611   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925801   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:54.925980   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:54.926180   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:54.926442   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:54.926471   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:34:55.219746   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:34:55.219774   68995 machine.go:97] duration metric: took 1.083316626s to provisionDockerMachine
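The %!s(MISSING) in the Run line above is Go's fmt output for a format verb with no matching argument in the logging call; the content actually written is visible in the SSH output that follows it. A sketch of the same command with the lost %s verb restored:

    # Write CRI-O's extra minikube options and restart the runtime
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio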
	I0719 19:34:55.219788   68995 start.go:293] postStartSetup for "old-k8s-version-570369" (driver="kvm2")
	I0719 19:34:55.219802   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:34:55.219823   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.220207   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:34:55.220250   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.223470   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.223803   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.223847   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.224015   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.224253   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.224507   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.224686   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.314430   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:34:55.318435   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:34:55.318461   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:34:55.318573   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:34:55.318696   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:34:55.318818   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:34:55.329180   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:34:55.351452   68995 start.go:296] duration metric: took 131.650561ms for postStartSetup
	I0719 19:34:55.351493   68995 fix.go:56] duration metric: took 20.882796261s for fixHost
	I0719 19:34:55.351516   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.354069   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354429   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.354449   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.354612   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.354786   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.354936   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.355083   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.355295   68995 main.go:141] libmachine: Using SSH client type: native
	I0719 19:34:55.355516   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0719 19:34:55.355528   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:34:55.468304   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417695.444458011
	
	I0719 19:34:55.468347   68995 fix.go:216] guest clock: 1721417695.444458011
	I0719 19:34:55.468357   68995 fix.go:229] Guest: 2024-07-19 19:34:55.444458011 +0000 UTC Remote: 2024-07-19 19:34:55.351498388 +0000 UTC m=+233.383860409 (delta=92.959623ms)
	I0719 19:34:55.468400   68995 fix.go:200] guest clock delta is within tolerance: 92.959623ms
	I0719 19:34:55.468407   68995 start.go:83] releasing machines lock for "old-k8s-version-570369", held for 20.999740081s
	I0719 19:34:55.468436   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.468696   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:55.471650   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472087   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.472126   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.472376   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.472870   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473063   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .DriverName
	I0719 19:34:55.473145   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:34:55.473190   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.473364   68995 ssh_runner.go:195] Run: cat /version.json
	I0719 19:34:55.473392   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHHostname
	I0719 19:34:55.476321   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476682   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476716   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.476736   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.476847   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477024   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477123   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:55.477149   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:55.477183   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477291   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHPort
	I0719 19:34:55.477341   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.477440   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHKeyPath
	I0719 19:34:55.477555   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetSSHUsername
	I0719 19:34:55.477729   68995 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/old-k8s-version-570369/id_rsa Username:docker}
	I0719 19:34:55.573427   68995 ssh_runner.go:195] Run: systemctl --version
	I0719 19:34:55.602019   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:34:55.759356   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:34:55.766310   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:34:55.766406   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:34:55.786237   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:34:55.786261   68995 start.go:495] detecting cgroup driver to use...
	I0719 19:34:55.786335   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:34:55.804512   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:34:55.818026   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:34:55.818081   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:34:55.832737   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:34:55.848018   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:34:55.972092   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:34:56.119305   68995 docker.go:233] disabling docker service ...
	I0719 19:34:56.119390   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:34:56.132814   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:34:56.145281   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:34:56.285671   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:34:56.413786   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:34:56.428220   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:34:56.447626   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 19:34:56.447679   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.458325   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:34:56.458419   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.467977   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.477127   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:34:56.487342   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:34:56.499618   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:34:56.510085   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:34:56.510149   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:34:56.524045   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:34:56.533806   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:34:56.671207   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:34:56.844623   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:34:56.844688   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:34:56.849762   68995 start.go:563] Will wait 60s for crictl version
	I0719 19:34:56.849829   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:34:56.853768   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:34:56.896644   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:34:56.896768   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.924343   68995 ssh_runner.go:195] Run: crio --version
	I0719 19:34:56.952939   68995 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 19:34:56.954228   68995 main.go:141] libmachine: (old-k8s-version-570369) Calling .GetIP
	I0719 19:34:56.957173   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957539   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e3:2b", ip: ""} in network mk-old-k8s-version-570369: {Iface:virbr2 ExpiryTime:2024-07-19 20:34:44 +0000 UTC Type:0 Mac:52:54:00:80:e3:2b Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:old-k8s-version-570369 Clientid:01:52:54:00:80:e3:2b}
	I0719 19:34:56.957571   68995 main.go:141] libmachine: (old-k8s-version-570369) DBG | domain old-k8s-version-570369 has defined IP address 192.168.39.220 and MAC address 52:54:00:80:e3:2b in network mk-old-k8s-version-570369
	I0719 19:34:56.957784   68995 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 19:34:56.961877   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:34:56.973917   68995 kubeadm.go:883] updating cluster {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:34:56.974058   68995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 19:34:56.974286   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:34:55.495295   68019 main.go:141] libmachine: (no-preload-971041) Calling .Start
	I0719 19:34:55.495475   68019 main.go:141] libmachine: (no-preload-971041) Ensuring networks are active...
	I0719 19:34:55.496223   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network default is active
	I0719 19:34:55.496554   68019 main.go:141] libmachine: (no-preload-971041) Ensuring network mk-no-preload-971041 is active
	I0719 19:34:55.496971   68019 main.go:141] libmachine: (no-preload-971041) Getting domain xml...
	I0719 19:34:55.497761   68019 main.go:141] libmachine: (no-preload-971041) Creating domain...
	I0719 19:34:56.870332   68019 main.go:141] libmachine: (no-preload-971041) Waiting to get IP...
	I0719 19:34:56.871429   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:56.871920   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:56.871979   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:56.871888   70115 retry.go:31] will retry after 201.799148ms: waiting for machine to come up
	I0719 19:34:57.075216   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.075744   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.075774   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.075692   70115 retry.go:31] will retry after 311.790253ms: waiting for machine to come up
	I0719 19:34:57.389280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.389811   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.389836   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.389752   70115 retry.go:31] will retry after 434.42849ms: waiting for machine to come up
	I0719 19:34:57.826106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:57.826558   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:57.826591   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:57.826524   70115 retry.go:31] will retry after 416.820957ms: waiting for machine to come up
	I0719 19:34:55.234692   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:34:55.234729   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:34:56.690525   68617 node_ready.go:49] node "embed-certs-889918" has status "Ready":"True"
	I0719 19:34:56.690548   68617 node_ready.go:38] duration metric: took 7.506818962s for node "embed-certs-889918" to be "Ready" ...
	I0719 19:34:56.690557   68617 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:34:56.697237   68617 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702754   68617 pod_ready.go:92] pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace has status "Ready":"True"
	I0719 19:34:56.702775   68617 pod_ready.go:81] duration metric: took 5.512702ms for pod "coredns-7db6d8ff4d-7h48w" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:56.702784   68617 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:58.709004   68617 pod_ready.go:102] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:00.709279   68617 pod_ready.go:92] pod "etcd-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.709305   68617 pod_ready.go:81] duration metric: took 4.00651453s for pod "etcd-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.709317   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715390   68617 pod_ready.go:92] pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.715416   68617 pod_ready.go:81] duration metric: took 6.089348ms for pod "kube-apiserver-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.715428   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720718   68617 pod_ready.go:92] pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.720754   68617 pod_ready.go:81] duration metric: took 5.316049ms for pod "kube-controller-manager-embed-certs-889918" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.720768   68617 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725631   68617 pod_ready.go:92] pod "kube-proxy-5fcgs" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:00.725660   68617 pod_ready.go:81] duration metric: took 4.884075ms for pod "kube-proxy-5fcgs" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:00.725671   68617 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	I0719 19:34:57.028362   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:34:57.028462   68995 ssh_runner.go:195] Run: which lz4
	I0719 19:34:57.032617   68995 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 19:34:57.037102   68995 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 19:34:57.037142   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 19:34:58.509147   68995 crio.go:462] duration metric: took 1.476562763s to copy over tarball
	I0719 19:34:58.509223   68995 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 19:35:01.576920   68995 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.067667619s)
	I0719 19:35:01.576949   68995 crio.go:469] duration metric: took 3.067772722s to extract the tarball
	I0719 19:35:01.576959   68995 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 19:35:01.622394   68995 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:01.658103   68995 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 19:35:01.658135   68995 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.658257   68995 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.658262   68995 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.658236   68995 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.658235   68995 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 19:35:01.658233   68995 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.658247   68995 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.658433   68995 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660336   68995 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:01.660361   68995 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.660343   68995 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.660339   68995 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.660340   68995 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.660333   68995 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:01.660714   68995 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 19:35:01.895704   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 19:35:01.907901   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:01.908071   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:01.923590   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 19:35:01.923831   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:01.955327   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:01.983003   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:00.235814   68536 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 19:35:00.235880   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.893649   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.893683   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:00.893697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:00.916935   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:00.916965   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:01.233380   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.242031   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.242062   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:01.732567   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:01.737153   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:01.737201   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.232697   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.238729   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:02.238763   68536 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:02.733474   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:35:02.738862   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0719 19:35:02.746708   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:35:02.746736   68536 api_server.go:131] duration metric: took 38.514241141s to wait for apiserver health ...
	I0719 19:35:02.746748   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.746757   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:34:58.245957   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.246426   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.246451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.246364   70115 retry.go:31] will retry after 682.234308ms: waiting for machine to come up
	I0719 19:34:58.930013   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:58.930623   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:58.930642   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:58.930578   70115 retry.go:31] will retry after 798.20186ms: waiting for machine to come up
	I0719 19:34:59.730628   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:34:59.731191   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:34:59.731213   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:34:59.731139   70115 retry.go:31] will retry after 911.96578ms: waiting for machine to come up
	I0719 19:35:00.644805   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:00.645291   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:00.645326   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:00.645245   70115 retry.go:31] will retry after 1.252978155s: waiting for machine to come up
	I0719 19:35:01.900101   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:01.900824   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:01.900848   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:01.900727   70115 retry.go:31] will retry after 1.662309997s: waiting for machine to come up
	I0719 19:35:02.966644   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:03.080565   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:03.099289   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:03.123741   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:03.243633   68536 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:03.243729   68536 system_pods.go:61] "coredns-7db6d8ff4d-tngnj" [e343f406-a7bc-46e8-986a-a55d3579c8d2] Running
	I0719 19:35:03.243751   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [f800aab8-97bf-4080-b2ec-bac5e070cd14] Running
	I0719 19:35:03.243767   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [afe884c0-eee2-4c52-b01f-c106afb9a244] Running
	I0719 19:35:03.243795   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ecece9ad-d0ed-4cad-b01e-d7254df00dfa] Running
	I0719 19:35:03.243816   68536 system_pods.go:61] "kube-proxy-cdb66" [1daaaaf5-4c7d-4c37-86f7-b52d11621ffa] Running
	I0719 19:35:03.243850   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [0359836d-2099-43f7-9dae-7c54755818d4] Running
	I0719 19:35:03.243874   68536 system_pods.go:61] "metrics-server-569cc877fc-wf82d" [8ae7709a-082d-4279-9d78-44940b2bdae4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:03.243889   68536 system_pods.go:61] "storage-provisioner" [767f30f2-20d0-4340-b70b-0570d42fa007] Running
	I0719 19:35:03.243909   68536 system_pods.go:74] duration metric: took 120.145691ms to wait for pod list to return data ...
	I0719 19:35:03.243942   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:02.733443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:04.734486   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:02.004356   68995 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 19:35:02.004410   68995 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 19:35:02.004451   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.076774   68995 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 19:35:02.076820   68995 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.076871   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.077195   68995 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 19:35:02.077230   68995 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.077274   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094076   68995 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 19:35:02.094105   68995 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 19:35:02.094118   68995 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.094148   68995 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 19:35:02.094169   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094186   68995 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.094156   68995 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.094233   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.094246   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100381   68995 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 19:35:02.100411   68995 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.100416   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 19:35:02.100449   68995 ssh_runner.go:195] Run: which crictl
	I0719 19:35:02.100545   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 19:35:02.100547   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 19:35:02.106653   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 19:35:02.106700   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 19:35:02.106781   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 19:35:02.214163   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 19:35:02.214249   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 19:35:02.216256   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 19:35:02.216348   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 19:35:02.247645   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 19:35:02.247723   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 19:35:02.247818   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 19:35:02.265494   68995 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 19:35:02.465885   68995 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:02.607013   68995 cache_images.go:92] duration metric: took 948.858667ms to LoadCachedImages
	W0719 19:35:02.607143   68995 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 19:35:02.607167   68995 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.20.0 crio true true} ...
	I0719 19:35:02.607303   68995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-570369 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:02.607399   68995 ssh_runner.go:195] Run: crio config
	I0719 19:35:02.661274   68995 cni.go:84] Creating CNI manager for ""
	I0719 19:35:02.661306   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:02.661326   68995 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:02.661353   68995 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-570369 NodeName:old-k8s-version-570369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 19:35:02.661525   68995 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-570369"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:02.661599   68995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 19:35:02.671792   68995 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:02.671878   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:02.683593   68995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 19:35:02.703448   68995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 19:35:02.723494   68995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 19:35:02.741880   68995 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:02.745968   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:02.759015   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:02.894896   68995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:02.912243   68995 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369 for IP: 192.168.39.220
	I0719 19:35:02.912272   68995 certs.go:194] generating shared ca certs ...
	I0719 19:35:02.912295   68995 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:02.912468   68995 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:02.912520   68995 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:02.912532   68995 certs.go:256] generating profile certs ...
	I0719 19:35:02.912660   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.key
	I0719 19:35:02.912732   68995 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key.41bb110d
	I0719 19:35:02.912784   68995 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key
	I0719 19:35:02.912919   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:02.912959   68995 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:02.912973   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:02.913006   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:02.913040   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:02.913070   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:02.913125   68995 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:02.914001   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:02.962668   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:02.997316   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:03.033868   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:03.073913   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 19:35:03.120607   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 19:35:03.156647   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:03.196763   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 19:35:03.233170   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:03.260949   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:03.286207   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:03.312158   68995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:03.333544   68995 ssh_runner.go:195] Run: openssl version
	I0719 19:35:03.339422   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:03.353775   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359494   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.359597   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:03.365926   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:03.380164   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:03.394069   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398358   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.398410   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:03.404158   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:03.415159   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:03.425887   68995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430584   68995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.430641   68995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:03.436201   68995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:03.448497   68995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:03.454026   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:03.461326   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:03.467492   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:03.473959   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:03.480337   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:03.487652   68995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:35:03.494941   68995 kubeadm.go:392] StartCluster: {Name:old-k8s-version-570369 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570369 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:03.495049   68995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:03.495095   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.539527   68995 cri.go:89] found id: ""
	I0719 19:35:03.539603   68995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:03.550212   68995 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:03.550232   68995 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:03.550290   68995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:03.562250   68995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:03.563624   68995 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-570369" does not appear in /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:35:03.564665   68995 kubeconfig.go:62] /home/jenkins/minikube-integration/19307-14841/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-570369" cluster setting kubeconfig missing "old-k8s-version-570369" context setting]
	I0719 19:35:03.566027   68995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:03.639264   68995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:03.651651   68995 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.220
	I0719 19:35:03.651684   68995 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:03.651698   68995 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:03.651753   68995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:03.694507   68995 cri.go:89] found id: ""
	I0719 19:35:03.694575   68995 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:03.713478   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:03.724717   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:03.724744   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:03.724803   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:03.736032   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:03.736099   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:03.746972   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:03.757246   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:03.757384   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:03.766980   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.776428   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:03.776493   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:03.788238   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:03.799163   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:03.799230   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:03.810607   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:03.820069   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.035635   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.678453   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.924967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.021723   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:05.118353   68995 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:05.118439   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:05.618699   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.119206   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:06.618828   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:03.565550   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:03.566079   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:03.566106   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:03.566042   70115 retry.go:31] will retry after 1.551544013s: waiting for machine to come up
	I0719 19:35:05.119602   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:05.120126   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:05.120147   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:05.120073   70115 retry.go:31] will retry after 2.205398899s: waiting for machine to come up
	I0719 19:35:07.327911   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:07.328428   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:07.328451   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:07.328390   70115 retry.go:31] will retry after 3.255187102s: waiting for machine to come up
	I0719 19:35:03.641957   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:03.641987   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:03.642001   68536 node_conditions.go:105] duration metric: took 398.042486ms to run NodePressure ...
	I0719 19:35:03.642020   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:04.658145   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.016096931s)
	I0719 19:35:04.658182   68536 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:04.664125   68536 retry.go:31] will retry after 336.601067ms: kubelet not initialised
	I0719 19:35:05.007823   68536 retry.go:31] will retry after 366.441912ms: kubelet not initialised
	I0719 19:35:05.379510   68536 retry.go:31] will retry after 463.132181ms: kubelet not initialised
	I0719 19:35:05.848603   68536 retry.go:31] will retry after 1.07509848s: kubelet not initialised
	I0719 19:35:06.931674   68536 kubeadm.go:739] kubelet initialised
	I0719 19:35:06.931699   68536 kubeadm.go:740] duration metric: took 2.273506864s waiting for restarted kubelet to initialise ...
	I0719 19:35:06.931706   68536 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:06.941748   68536 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:07.231890   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:09.732362   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:07.119022   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:07.619075   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.119512   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:08.618875   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.118565   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:09.619032   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.118774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.618802   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.119561   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:11.618914   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:10.585455   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:10.585919   68019 main.go:141] libmachine: (no-preload-971041) DBG | unable to find current IP address of domain no-preload-971041 in network mk-no-preload-971041
	I0719 19:35:10.585946   68019 main.go:141] libmachine: (no-preload-971041) DBG | I0719 19:35:10.585867   70115 retry.go:31] will retry after 3.911827583s: waiting for machine to come up
	I0719 19:35:08.947740   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:10.959978   68536 pod_ready.go:102] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.449422   68536 pod_ready.go:92] pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.449445   68536 pod_ready.go:81] duration metric: took 5.507672716s for pod "coredns-7db6d8ff4d-tngnj" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.449459   68536 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455485   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.455513   68536 pod_ready.go:81] duration metric: took 6.046494ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.455524   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461238   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:12.461259   68536 pod_ready.go:81] duration metric: took 5.727166ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.461268   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:12.230775   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:14.232074   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.232335   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:12.119049   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:12.619110   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.119295   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:13.619413   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.118909   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.619324   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.118964   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:15.618534   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.119209   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:16.619370   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:14.500920   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501377   68019 main.go:141] libmachine: (no-preload-971041) Found IP for machine: 192.168.50.119
	I0719 19:35:14.501409   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has current primary IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.501420   68019 main.go:141] libmachine: (no-preload-971041) Reserving static IP address...
	I0719 19:35:14.501837   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.501869   68019 main.go:141] libmachine: (no-preload-971041) Reserved static IP address: 192.168.50.119
	I0719 19:35:14.501888   68019 main.go:141] libmachine: (no-preload-971041) DBG | skip adding static IP to network mk-no-preload-971041 - found existing host DHCP lease matching {name: "no-preload-971041", mac: "52:54:00:72:b8:bd", ip: "192.168.50.119"}
	I0719 19:35:14.501907   68019 main.go:141] libmachine: (no-preload-971041) DBG | Getting to WaitForSSH function...
	I0719 19:35:14.501923   68019 main.go:141] libmachine: (no-preload-971041) Waiting for SSH to be available...
	I0719 19:35:14.504106   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504417   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.504451   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.504563   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH client type: external
	I0719 19:35:14.504609   68019 main.go:141] libmachine: (no-preload-971041) DBG | Using SSH private key: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa (-rw-------)
	I0719 19:35:14.504650   68019 main.go:141] libmachine: (no-preload-971041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 19:35:14.504691   68019 main.go:141] libmachine: (no-preload-971041) DBG | About to run SSH command:
	I0719 19:35:14.504710   68019 main.go:141] libmachine: (no-preload-971041) DBG | exit 0
	I0719 19:35:14.624083   68019 main.go:141] libmachine: (no-preload-971041) DBG | SSH cmd err, output: <nil>: 
	I0719 19:35:14.624404   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetConfigRaw
	I0719 19:35:14.625152   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.628074   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628431   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.628460   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.628756   68019 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/config.json ...
	I0719 19:35:14.628949   68019 machine.go:94] provisionDockerMachine start ...
	I0719 19:35:14.628967   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:14.629217   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.631862   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632252   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.632280   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.632505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.632703   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.632878   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.633052   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.633256   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.633484   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.633500   68019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 19:35:14.732493   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 19:35:14.732533   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.732918   68019 buildroot.go:166] provisioning hostname "no-preload-971041"
	I0719 19:35:14.732943   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.733124   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.735603   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.735962   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.735998   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.736203   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.736432   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736617   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.736847   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.737046   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.737264   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.737280   68019 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-971041 && echo "no-preload-971041" | sudo tee /etc/hostname
	I0719 19:35:14.850040   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-971041
	
	I0719 19:35:14.850066   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.853200   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.853584   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.853618   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.854242   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:14.854505   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854720   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:14.854889   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:14.855089   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:14.855321   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:14.855344   68019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-971041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-971041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-971041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 19:35:14.959910   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 19:35:14.959942   68019 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19307-14841/.minikube CaCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19307-14841/.minikube}
	I0719 19:35:14.959969   68019 buildroot.go:174] setting up certificates
	I0719 19:35:14.959985   68019 provision.go:84] configureAuth start
	I0719 19:35:14.959997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetMachineName
	I0719 19:35:14.960252   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:14.963262   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963701   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.963732   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.963997   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:14.966731   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967155   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:14.967176   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:14.967343   68019 provision.go:143] copyHostCerts
	I0719 19:35:14.967418   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem, removing ...
	I0719 19:35:14.967435   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem
	I0719 19:35:14.967506   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/ca.pem (1078 bytes)
	I0719 19:35:14.967642   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem, removing ...
	I0719 19:35:14.967657   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem
	I0719 19:35:14.967696   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/cert.pem (1123 bytes)
	I0719 19:35:14.967783   68019 exec_runner.go:144] found /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem, removing ...
	I0719 19:35:14.967796   68019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem
	I0719 19:35:14.967826   68019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19307-14841/.minikube/key.pem (1679 bytes)
	I0719 19:35:14.967924   68019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem org=jenkins.no-preload-971041 san=[127.0.0.1 192.168.50.119 localhost minikube no-preload-971041]
	I0719 19:35:15.077149   68019 provision.go:177] copyRemoteCerts
	I0719 19:35:15.077207   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 19:35:15.077229   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.079901   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080234   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.080265   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.080489   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.080694   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.080897   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.081069   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.161684   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 19:35:15.186551   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 19:35:15.208816   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 19:35:15.233238   68019 provision.go:87] duration metric: took 273.242567ms to configureAuth
	I0719 19:35:15.233264   68019 buildroot.go:189] setting minikube options for container-runtime
	I0719 19:35:15.233500   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:35:15.233593   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.236166   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236575   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.236621   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.236816   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.237047   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237204   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.237393   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.237538   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.237727   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.237752   68019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 19:35:15.489927   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 19:35:15.489955   68019 machine.go:97] duration metric: took 860.992755ms to provisionDockerMachine
	I0719 19:35:15.489968   68019 start.go:293] postStartSetup for "no-preload-971041" (driver="kvm2")
	I0719 19:35:15.489983   68019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 19:35:15.490011   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.490360   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 19:35:15.490390   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.493498   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.493895   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.493921   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.494033   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.494190   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.494355   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.494522   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.575024   68019 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 19:35:15.579078   68019 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 19:35:15.579102   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/addons for local assets ...
	I0719 19:35:15.579180   68019 filesync.go:126] Scanning /home/jenkins/minikube-integration/19307-14841/.minikube/files for local assets ...
	I0719 19:35:15.579269   68019 filesync.go:149] local asset: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem -> 220282.pem in /etc/ssl/certs
	I0719 19:35:15.579483   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 19:35:15.589240   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:15.612071   68019 start.go:296] duration metric: took 122.087671ms for postStartSetup
	I0719 19:35:15.612115   68019 fix.go:56] duration metric: took 20.143554276s for fixHost
	I0719 19:35:15.612139   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.614679   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.614969   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.614994   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.615141   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.615362   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615517   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.615645   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.615872   68019 main.go:141] libmachine: Using SSH client type: native
	I0719 19:35:15.616088   68019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.119 22 <nil> <nil>}
	I0719 19:35:15.616103   68019 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 19:35:15.712393   68019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721417715.673203772
	
	I0719 19:35:15.712417   68019 fix.go:216] guest clock: 1721417715.673203772
	I0719 19:35:15.712430   68019 fix.go:229] Guest: 2024-07-19 19:35:15.673203772 +0000 UTC Remote: 2024-07-19 19:35:15.612120403 +0000 UTC m=+357.807790755 (delta=61.083369ms)
	I0719 19:35:15.712454   68019 fix.go:200] guest clock delta is within tolerance: 61.083369ms
	I0719 19:35:15.712468   68019 start.go:83] releasing machines lock for "no-preload-971041", held for 20.243943716s
	I0719 19:35:15.712492   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.712798   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:15.715651   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716143   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.716167   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.716435   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717039   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717281   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:35:15.717381   68019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 19:35:15.717428   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.717554   68019 ssh_runner.go:195] Run: cat /version.json
	I0719 19:35:15.717588   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:35:15.720120   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720461   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720489   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720510   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720731   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.720860   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:15.720891   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:15.720912   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721167   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:35:15.721179   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721365   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.721419   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:35:15.721559   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:35:15.721711   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:35:15.834816   68019 ssh_runner.go:195] Run: systemctl --version
	I0719 19:35:15.841452   68019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 19:35:15.982250   68019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 19:35:15.989060   68019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 19:35:15.989135   68019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 19:35:16.003901   68019 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 19:35:16.003926   68019 start.go:495] detecting cgroup driver to use...
	I0719 19:35:16.003995   68019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 19:35:16.023680   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 19:35:16.038740   68019 docker.go:217] disabling cri-docker service (if available) ...
	I0719 19:35:16.038818   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 19:35:16.053907   68019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 19:35:16.067484   68019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 19:35:16.186006   68019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 19:35:16.339983   68019 docker.go:233] disabling docker service ...
	I0719 19:35:16.340044   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 19:35:16.353948   68019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 19:35:16.367479   68019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 19:35:16.513192   68019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 19:35:16.642277   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 19:35:16.656031   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 19:35:16.674529   68019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 19:35:16.674606   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.684907   68019 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 19:35:16.684974   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.695542   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.706406   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.716477   68019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 19:35:16.726989   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.736973   68019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.753617   68019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 19:35:16.764022   68019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 19:35:16.772871   68019 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 19:35:16.772919   68019 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 19:35:16.785870   68019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 19:35:16.794887   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:16.942160   68019 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 19:35:17.084983   68019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 19:35:17.085048   68019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 19:35:17.089799   68019 start.go:563] Will wait 60s for crictl version
	I0719 19:35:17.089866   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.093813   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 19:35:17.130312   68019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 19:35:17.130390   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.157666   68019 ssh_runner.go:195] Run: crio --version
	I0719 19:35:17.186672   68019 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 19:35:17.188084   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetIP
	I0719 19:35:17.190975   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191340   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:35:17.191369   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:35:17.191561   68019 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 19:35:17.195580   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:17.207427   68019 kubeadm.go:883] updating cluster {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 19:35:17.207577   68019 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 19:35:17.207631   68019 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 19:35:17.243262   68019 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 19:35:17.243288   68019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 19:35:17.243386   68019 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.243412   68019 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.243417   68019 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 19:35:17.243423   68019 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.243448   68019 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.243394   68019 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.243395   68019 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.243493   68019 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.245104   68019 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 19:35:17.245131   68019 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.245106   68019 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.245116   68019 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:17.245179   68019 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.245186   68019 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.245375   68019 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.245546   68019 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.462701   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.480663   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 19:35:17.488510   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.491593   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.501505   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.502452   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.506436   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.522704   68019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 19:35:17.522760   68019 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.522811   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659364   68019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 19:35:17.659399   68019 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.659429   68019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 19:35:17.659451   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659462   68019 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.659479   68019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 19:35:17.659515   68019 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.659519   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659550   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659591   68019 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 19:35:17.659549   68019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 19:35:17.659619   68019 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.659627   68019 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.659631   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 19:35:17.659647   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.659658   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:17.670058   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 19:35:17.670089   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 19:35:17.671989   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 19:35:17.672178   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 19:35:17.672292   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 19:35:17.738069   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.738169   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.789982   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 19:35:17.790070   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790088   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 19:35:17.790163   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:17.790104   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:17.790187   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:17.791690   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791731   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.791748   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 19:35:17.791761   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791777   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:17.791798   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 19:35:17.791810   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:17.799893   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 19:35:17.800102   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 19:35:17.800410   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 19:35:14.467021   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:16.467879   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:18.234926   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:20.733492   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:17.119366   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:17.619359   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.118510   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.618899   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.119102   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:19.619423   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.119089   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:20.618533   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.118809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:21.618616   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:18.089182   68019 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785577   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.993744788s)
	I0719 19:35:19.785607   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.99381007s)
	I0719 19:35:19.785615   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 19:35:19.785628   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785634   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785654   68019 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.993829277s)
	I0719 19:35:19.785664   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 19:35:19.785684   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 19:35:19.785741   68019 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.696527431s)
	I0719 19:35:19.785825   68019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 19:35:19.785866   68019 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:19.785910   68019 ssh_runner.go:195] Run: which crictl
	I0719 19:35:19.790112   68019 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:35:22.178743   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.393030453s)
	I0719 19:35:22.178773   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 19:35:22.178783   68019 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178800   68019 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.388665972s)
	I0719 19:35:22.178830   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 19:35:22.178848   68019 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 19:35:22.178920   68019 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:22.183029   68019 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 19:35:18.968128   68536 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:21.467026   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.467056   68536 pod_ready.go:81] duration metric: took 9.005780978s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.467069   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472259   68536 pod_ready.go:92] pod "kube-proxy-cdb66" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.472282   68536 pod_ready.go:81] duration metric: took 5.205132ms for pod "kube-proxy-cdb66" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.472299   68536 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476400   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:21.476420   68536 pod_ready.go:81] duration metric: took 4.112093ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:21.476431   68536 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:22.733565   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.232147   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:22.119363   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:22.618892   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.118639   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:23.619386   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.118543   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:24.618558   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.118612   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.619231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.118601   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:26.618755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:25.462621   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.283764696s)
	I0719 19:35:25.462658   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 19:35:25.462676   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:25.462728   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 19:35:26.824398   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.361644989s)
	I0719 19:35:26.824432   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 19:35:26.824443   68019 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:26.824492   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 19:35:23.484653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:25.984229   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.233184   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:29.732740   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:27.118683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:27.618797   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.118832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.618508   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.118591   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:29.618542   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.118979   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:30.619506   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.118721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:31.619394   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:28.791678   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.967160098s)
	I0719 19:35:28.791704   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 19:35:28.791716   68019 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:28.791769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 19:35:30.745676   68019 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.95388111s)
	I0719 19:35:30.745709   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 19:35:30.745721   68019 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:30.745769   68019 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 19:35:31.391036   68019 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19307-14841/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 19:35:31.391080   68019 cache_images.go:123] Successfully loaded all cached images
	I0719 19:35:31.391085   68019 cache_images.go:92] duration metric: took 14.147757536s to LoadCachedImages
	I0719 19:35:31.391098   68019 kubeadm.go:934] updating node { 192.168.50.119 8443 v1.31.0-beta.0 crio true true} ...
	I0719 19:35:31.391207   68019 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-971041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 19:35:31.391282   68019 ssh_runner.go:195] Run: crio config
	I0719 19:35:31.434562   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:31.434584   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:31.434594   68019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 19:35:31.434617   68019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.119 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-971041 NodeName:no-preload-971041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 19:35:31.434745   68019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-971041"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 19:35:31.434803   68019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 19:35:31.446259   68019 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 19:35:31.446337   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 19:35:31.456298   68019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 19:35:31.473270   68019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 19:35:31.490353   68019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 19:35:31.506996   68019 ssh_runner.go:195] Run: grep 192.168.50.119	control-plane.minikube.internal$ /etc/hosts
	I0719 19:35:31.510538   68019 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 19:35:31.522076   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:35:31.638008   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:35:31.654421   68019 certs.go:68] Setting up /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041 for IP: 192.168.50.119
	I0719 19:35:31.654443   68019 certs.go:194] generating shared ca certs ...
	I0719 19:35:31.654458   68019 certs.go:226] acquiring lock for ca certs: {Name:mk8941444a80e72b310c1e070843c9ed097d9f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:35:31.654637   68019 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key
	I0719 19:35:31.654703   68019 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key
	I0719 19:35:31.654732   68019 certs.go:256] generating profile certs ...
	I0719 19:35:31.654844   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.key
	I0719 19:35:31.654929   68019 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key.77bb0cd9
	I0719 19:35:31.654978   68019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key
	I0719 19:35:31.655112   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem (1338 bytes)
	W0719 19:35:31.655148   68019 certs.go:480] ignoring /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028_empty.pem, impossibly tiny 0 bytes
	I0719 19:35:31.655161   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 19:35:31.655204   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/ca.pem (1078 bytes)
	I0719 19:35:31.655234   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/cert.pem (1123 bytes)
	I0719 19:35:31.655274   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/certs/key.pem (1679 bytes)
	I0719 19:35:31.655325   68019 certs.go:484] found cert: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem (1708 bytes)
	I0719 19:35:31.656172   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 19:35:31.683885   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 19:35:31.714681   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 19:35:31.742470   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 19:35:31.770348   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 19:35:31.804982   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 19:35:31.830750   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 19:35:31.857826   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 19:35:31.880913   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/certs/22028.pem --> /usr/share/ca-certificates/22028.pem (1338 bytes)
	I0719 19:35:31.903475   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/ssl/certs/220282.pem --> /usr/share/ca-certificates/220282.pem (1708 bytes)
	I0719 19:35:31.925060   68019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 19:35:31.948414   68019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 19:35:31.964203   68019 ssh_runner.go:195] Run: openssl version
	I0719 19:35:31.969774   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22028.pem && ln -fs /usr/share/ca-certificates/22028.pem /etc/ssl/certs/22028.pem"
	I0719 19:35:31.981478   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.986937   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:26 /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.987002   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22028.pem
	I0719 19:35:31.993471   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22028.pem /etc/ssl/certs/51391683.0"
	I0719 19:35:32.004689   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/220282.pem && ln -fs /usr/share/ca-certificates/220282.pem /etc/ssl/certs/220282.pem"
	I0719 19:35:32.015199   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019475   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:26 /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.019514   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/220282.pem
	I0719 19:35:32.024962   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/220282.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 19:35:32.034782   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 19:35:32.044867   68019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048801   68019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:15 /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.048849   68019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 19:35:32.054461   68019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 19:35:32.064884   68019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 19:35:32.069053   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 19:35:32.074547   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 19:35:32.080411   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 19:35:32.085765   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 19:35:32.090933   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 19:35:32.096183   68019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 19:35:32.101377   68019 kubeadm.go:392] StartCluster: {Name:no-preload-971041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-971041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 19:35:32.101485   68019 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 19:35:32.101523   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.139740   68019 cri.go:89] found id: ""
	I0719 19:35:32.139796   68019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 19:35:32.149576   68019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 19:35:32.149595   68019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 19:35:32.149640   68019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 19:35:32.159486   68019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:35:32.160857   68019 kubeconfig.go:125] found "no-preload-971041" server: "https://192.168.50.119:8443"
	I0719 19:35:32.163357   68019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 19:35:32.173671   68019 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.119
	I0719 19:35:32.173710   68019 kubeadm.go:1160] stopping kube-system containers ...
	I0719 19:35:32.173723   68019 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 19:35:32.173767   68019 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 19:35:32.207095   68019 cri.go:89] found id: ""
	I0719 19:35:32.207172   68019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 19:35:32.224068   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:35:32.233552   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:35:32.233571   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:35:32.233623   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:35:32.241920   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:35:32.241971   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:35:32.250594   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:35:32.258910   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:35:32.258961   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:35:32.273812   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.282220   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:35:32.282277   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:35:32.290961   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:35:32.299396   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:35:32.299498   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:35:32.308343   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:35:32.317592   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:32.421587   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:28.482817   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:30.982904   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.231730   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:34.232127   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:32.118495   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:32.618922   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.119055   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.619189   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.118644   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.618987   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.119225   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:35.619447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.118778   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:36.619327   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:33.447745   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.026128232s)
	I0719 19:35:33.447770   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.662089   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.725683   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:33.792637   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:35:33.792728   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.292798   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.793080   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:34.807168   68019 api_server.go:72] duration metric: took 1.014530984s to wait for apiserver process to appear ...
	I0719 19:35:34.807196   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:35:34.807224   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:34.807771   68019 api_server.go:269] stopped: https://192.168.50.119:8443/healthz: Get "https://192.168.50.119:8443/healthz": dial tcp 192.168.50.119:8443: connect: connection refused
	I0719 19:35:35.308340   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.476017   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.476054   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.476070   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.518367   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 19:35:37.518394   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 19:35:37.807881   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:37.817233   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:37.817256   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:34.984368   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.307564   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.312884   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 19:35:38.312920   68019 api_server.go:103] status: https://192.168.50.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 19:35:38.807444   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:35:38.811634   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:35:38.818446   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:35:38.818473   68019 api_server.go:131] duration metric: took 4.011270764s to wait for apiserver health ...
	I0719 19:35:38.818484   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:35:38.818493   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:35:38.820420   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:35:36.733194   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:38.734341   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.232296   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:37.118504   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:37.619038   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.118793   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.619025   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.118668   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:39.619156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.118595   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:40.618638   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.119418   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:41.618806   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:38.821795   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:35:38.833825   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 19:35:38.855799   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:35:38.866571   68019 system_pods.go:59] 8 kube-system pods found
	I0719 19:35:38.866614   68019 system_pods.go:61] "coredns-5cfdc65f69-dzgdb" [1aca7daa-f2b9-41f9-ab4f-f6dcdf5a5655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:35:38.866623   68019 system_pods.go:61] "etcd-no-preload-971041" [a2001218-3485-47a4-b7ba-bc5c41227b84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 19:35:38.866632   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [f1a82c83-f8fc-44da-ba9c-558422a050cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 19:35:38.866641   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [d760c600-3d53-436f-80b1-43f97c8a1d97] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 19:35:38.866649   68019 system_pods.go:61] "kube-proxy-xnwc9" [f709e007-bc70-4aed-a66c-c7a8218df4c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 19:35:38.866661   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [0751f461-931d-4ae9-bba2-393487903f77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 19:35:38.866671   68019 system_pods.go:61] "metrics-server-78fcd8795b-j78vm" [dd7ef76a-38b2-4660-88ce-a4dbe6bdd43e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:35:38.866683   68019 system_pods.go:61] "storage-provisioner" [68333fba-f647-4ddc-81a1-c514426efb1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:35:38.866690   68019 system_pods.go:74] duration metric: took 10.871899ms to wait for pod list to return data ...
	I0719 19:35:38.866700   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:35:38.871059   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:35:38.871085   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:35:38.871096   68019 node_conditions.go:105] duration metric: took 4.388431ms to run NodePressure ...
	I0719 19:35:38.871117   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 19:35:39.202189   68019 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208833   68019 kubeadm.go:739] kubelet initialised
	I0719 19:35:39.208852   68019 kubeadm.go:740] duration metric: took 6.63993ms waiting for restarted kubelet to initialise ...
	I0719 19:35:39.208860   68019 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:35:39.213105   68019 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:41.221061   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:39.482629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:41.484400   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.732393   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.230839   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:42.118791   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:42.619397   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.119214   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.619520   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.118776   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:44.618539   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.118556   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:45.619338   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.119336   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:46.618522   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:43.221724   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:44.720005   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:44.720027   68019 pod_ready.go:81] duration metric: took 5.506899416s for pod "coredns-5cfdc65f69-dzgdb" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:44.720036   68019 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:46.725895   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:43.983502   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:46.482945   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:48.232409   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.232711   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:47.119180   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:47.618740   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.118755   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.618715   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.118497   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:49.619062   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.118758   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:50.618820   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.119238   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:51.618777   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:48.726634   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.726838   68019 pod_ready.go:102] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:51.727136   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.727157   68019 pod_ready.go:81] duration metric: took 7.007114402s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.727166   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731799   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.731826   68019 pod_ready.go:81] duration metric: took 4.653065ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.731868   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736589   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.736609   68019 pod_ready.go:81] duration metric: took 4.730944ms for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.736622   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740944   68019 pod_ready.go:92] pod "kube-proxy-xnwc9" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.740969   68019 pod_ready.go:81] duration metric: took 4.339031ms for pod "kube-proxy-xnwc9" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.740990   68019 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.744978   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:35:51.744996   68019 pod_ready.go:81] duration metric: took 3.998956ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:51.745004   68019 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	I0719 19:35:48.983216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:50.983267   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.234034   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:54.234538   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:52.119537   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:52.618848   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.118706   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.618985   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.119456   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:54.618507   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.118511   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:55.618721   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.118586   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:56.618540   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:53.751346   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.252649   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:53.482767   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:55.982603   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:56.732054   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:59.231910   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:57.118852   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:57.619296   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.118571   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.619133   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.119529   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:59.618552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.118563   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:00.618818   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.118545   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:01.619292   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:35:58.751344   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.751687   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:35:58.482346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:00.983025   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.983499   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:01.731466   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:03.732078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:06.232464   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:02.118552   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:02.618759   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.119280   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:03.619453   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.119391   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:04.619016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:05.118566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:05.118653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:05.157126   68995 cri.go:89] found id: ""
	I0719 19:36:05.157154   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.157163   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:05.157169   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:05.157218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:05.190115   68995 cri.go:89] found id: ""
	I0719 19:36:05.190143   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.190153   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:05.190161   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:05.190218   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:05.222515   68995 cri.go:89] found id: ""
	I0719 19:36:05.222542   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.222550   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:05.222559   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:05.222609   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:05.264077   68995 cri.go:89] found id: ""
	I0719 19:36:05.264096   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.264103   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:05.264108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:05.264180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:05.305227   68995 cri.go:89] found id: ""
	I0719 19:36:05.305260   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.305271   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:05.305278   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:05.305365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:05.346841   68995 cri.go:89] found id: ""
	I0719 19:36:05.346870   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.346883   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:05.346893   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:05.346948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:05.386787   68995 cri.go:89] found id: ""
	I0719 19:36:05.386816   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.386826   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:05.386832   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:05.386888   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:05.425126   68995 cri.go:89] found id: ""
	I0719 19:36:05.425151   68995 logs.go:276] 0 containers: []
	W0719 19:36:05.425161   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:05.425172   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:05.425187   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:05.476260   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:05.476291   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:05.489614   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:05.489639   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:05.616060   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:05.616082   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:05.616095   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:05.680089   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:05.680121   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:03.251437   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.253289   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.751972   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:05.483082   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:07.983293   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.233421   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:10.731139   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:08.219734   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:08.234371   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:08.234437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:08.279970   68995 cri.go:89] found id: ""
	I0719 19:36:08.279998   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.280009   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:08.280016   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:08.280068   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:08.315997   68995 cri.go:89] found id: ""
	I0719 19:36:08.316026   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.316037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:08.316044   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:08.316108   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:08.350044   68995 cri.go:89] found id: ""
	I0719 19:36:08.350073   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.350082   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:08.350089   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:08.350149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:08.407460   68995 cri.go:89] found id: ""
	I0719 19:36:08.407482   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.407494   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:08.407501   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:08.407559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:08.442445   68995 cri.go:89] found id: ""
	I0719 19:36:08.442476   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.442486   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:08.442493   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:08.442564   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:08.478149   68995 cri.go:89] found id: ""
	I0719 19:36:08.478178   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.478193   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:08.478200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:08.478261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:08.523968   68995 cri.go:89] found id: ""
	I0719 19:36:08.523996   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.524003   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:08.524009   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:08.524083   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:08.558285   68995 cri.go:89] found id: ""
	I0719 19:36:08.558312   68995 logs.go:276] 0 containers: []
	W0719 19:36:08.558322   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:08.558332   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:08.558347   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:08.570518   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:08.570544   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:08.642837   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:08.642861   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:08.642874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:08.716865   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:08.716905   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:08.760499   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:08.760530   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.312231   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:11.326565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:11.326640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:11.359787   68995 cri.go:89] found id: ""
	I0719 19:36:11.359813   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.359823   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:11.359829   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:11.359905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:11.392541   68995 cri.go:89] found id: ""
	I0719 19:36:11.392565   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.392573   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:11.392578   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:11.392626   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:11.426313   68995 cri.go:89] found id: ""
	I0719 19:36:11.426340   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.426357   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:11.426363   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:11.426426   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:11.463138   68995 cri.go:89] found id: ""
	I0719 19:36:11.463164   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.463174   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:11.463181   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:11.463242   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:11.496506   68995 cri.go:89] found id: ""
	I0719 19:36:11.496531   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.496542   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:11.496549   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:11.496610   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:11.533917   68995 cri.go:89] found id: ""
	I0719 19:36:11.533947   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.533960   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:11.533969   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:11.534048   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:11.567245   68995 cri.go:89] found id: ""
	I0719 19:36:11.567268   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.567277   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:11.567283   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:11.567332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:11.609340   68995 cri.go:89] found id: ""
	I0719 19:36:11.609365   68995 logs.go:276] 0 containers: []
	W0719 19:36:11.609376   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:11.609386   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:11.609400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:11.659918   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:11.659953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:11.673338   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:11.673371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:11.745829   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:11.745854   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:11.745868   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:11.816860   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:11.816897   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:10.251374   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.252231   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:09.983472   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.482718   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:12.732496   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:15.231796   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.357653   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:14.371820   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:14.371900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:14.408447   68995 cri.go:89] found id: ""
	I0719 19:36:14.408478   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.408488   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:14.408512   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:14.408583   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:14.443253   68995 cri.go:89] found id: ""
	I0719 19:36:14.443283   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.443319   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:14.443329   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:14.443403   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:14.476705   68995 cri.go:89] found id: ""
	I0719 19:36:14.476730   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.476739   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:14.476746   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:14.476806   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:14.508251   68995 cri.go:89] found id: ""
	I0719 19:36:14.508276   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.508286   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:14.508294   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:14.508365   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:14.540822   68995 cri.go:89] found id: ""
	I0719 19:36:14.540850   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.540860   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:14.540868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:14.540930   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:14.574264   68995 cri.go:89] found id: ""
	I0719 19:36:14.574294   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.574320   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:14.574329   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:14.574409   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:14.606520   68995 cri.go:89] found id: ""
	I0719 19:36:14.606553   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.606565   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:14.606573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:14.606646   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:14.640693   68995 cri.go:89] found id: ""
	I0719 19:36:14.640722   68995 logs.go:276] 0 containers: []
	W0719 19:36:14.640731   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:14.640743   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:14.640757   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:14.697534   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:14.697575   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:14.710323   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:14.710365   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:14.781478   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:14.781501   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:14.781516   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:14.887498   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:14.887535   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:14.750464   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.751420   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:14.483133   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:16.982943   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.732205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.733051   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:17.427726   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:17.441950   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:17.442041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:17.475370   68995 cri.go:89] found id: ""
	I0719 19:36:17.475398   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.475408   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:17.475415   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:17.475474   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:17.508258   68995 cri.go:89] found id: ""
	I0719 19:36:17.508289   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.508307   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:17.508315   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:17.508380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:17.542625   68995 cri.go:89] found id: ""
	I0719 19:36:17.542656   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.542668   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:17.542676   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:17.542750   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:17.583168   68995 cri.go:89] found id: ""
	I0719 19:36:17.583199   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.583210   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:17.583218   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:17.583280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:17.619339   68995 cri.go:89] found id: ""
	I0719 19:36:17.619369   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.619380   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:17.619386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:17.619438   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:17.653194   68995 cri.go:89] found id: ""
	I0719 19:36:17.653219   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.653227   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:17.653233   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:17.653289   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:17.689086   68995 cri.go:89] found id: ""
	I0719 19:36:17.689115   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.689123   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:17.689129   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:17.689245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:17.723079   68995 cri.go:89] found id: ""
	I0719 19:36:17.723108   68995 logs.go:276] 0 containers: []
	W0719 19:36:17.723115   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:17.723124   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:17.723148   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:17.736805   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:17.736829   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:17.808197   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:17.808219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:17.808251   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:17.893127   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:17.893173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:17.936617   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:17.936653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.489347   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:20.502806   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:20.502884   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:20.543791   68995 cri.go:89] found id: ""
	I0719 19:36:20.543822   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.543846   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:20.543855   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:20.543924   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:20.582638   68995 cri.go:89] found id: ""
	I0719 19:36:20.582666   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.582678   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:20.582685   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:20.582742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:20.632150   68995 cri.go:89] found id: ""
	I0719 19:36:20.632179   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.632192   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:20.632200   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:20.632265   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:20.665934   68995 cri.go:89] found id: ""
	I0719 19:36:20.665963   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.665974   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:20.665981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:20.666040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:20.703637   68995 cri.go:89] found id: ""
	I0719 19:36:20.703668   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.703679   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:20.703686   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:20.703755   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:20.736467   68995 cri.go:89] found id: ""
	I0719 19:36:20.736496   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.736506   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:20.736514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:20.736582   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:20.771313   68995 cri.go:89] found id: ""
	I0719 19:36:20.771337   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.771345   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:20.771351   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:20.771408   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:20.804915   68995 cri.go:89] found id: ""
	I0719 19:36:20.804938   68995 logs.go:276] 0 containers: []
	W0719 19:36:20.804944   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:20.804953   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:20.804964   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:20.880189   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:20.880229   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:20.919653   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:20.919682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:20.968588   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:20.968626   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:20.983781   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:20.983810   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:21.050261   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:18.752612   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.251202   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:19.483771   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:21.983390   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:22.232680   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.731208   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:23.550679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:23.563483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:23.563541   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:23.603919   68995 cri.go:89] found id: ""
	I0719 19:36:23.603950   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.603961   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:23.603969   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:23.604034   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:23.640354   68995 cri.go:89] found id: ""
	I0719 19:36:23.640377   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.640384   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:23.640390   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:23.640441   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:23.672609   68995 cri.go:89] found id: ""
	I0719 19:36:23.672644   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.672661   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:23.672668   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:23.672732   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:23.709547   68995 cri.go:89] found id: ""
	I0719 19:36:23.709577   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.709591   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:23.709598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:23.709658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:23.742474   68995 cri.go:89] found id: ""
	I0719 19:36:23.742498   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.742508   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:23.742514   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:23.742578   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:23.779339   68995 cri.go:89] found id: ""
	I0719 19:36:23.779372   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.779385   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:23.779393   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:23.779462   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:23.811984   68995 cri.go:89] found id: ""
	I0719 19:36:23.812010   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.812022   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:23.812031   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:23.812097   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:23.844692   68995 cri.go:89] found id: ""
	I0719 19:36:23.844716   68995 logs.go:276] 0 containers: []
	W0719 19:36:23.844724   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:23.844733   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:23.844746   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:23.899688   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:23.899721   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:23.915771   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:23.915799   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:23.986509   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:23.986529   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:23.986543   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:24.065621   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:24.065653   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:26.602809   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:26.615541   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:26.615616   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:26.647648   68995 cri.go:89] found id: ""
	I0719 19:36:26.647671   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.647680   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:26.647685   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:26.647742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:26.682383   68995 cri.go:89] found id: ""
	I0719 19:36:26.682409   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.682417   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:26.682423   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:26.682468   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:26.715374   68995 cri.go:89] found id: ""
	I0719 19:36:26.715398   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.715405   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:26.715411   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:26.715482   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:26.748636   68995 cri.go:89] found id: ""
	I0719 19:36:26.748663   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.748673   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:26.748680   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:26.748742   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:26.783186   68995 cri.go:89] found id: ""
	I0719 19:36:26.783218   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.783229   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:26.783236   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:26.783299   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:26.813344   68995 cri.go:89] found id: ""
	I0719 19:36:26.813370   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.813380   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:26.813386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:26.813434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:26.845054   68995 cri.go:89] found id: ""
	I0719 19:36:26.845083   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.845093   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:26.845100   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:26.845166   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:26.878589   68995 cri.go:89] found id: ""
	I0719 19:36:26.878614   68995 logs.go:276] 0 containers: []
	W0719 19:36:26.878621   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:26.878629   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:26.878640   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:26.931251   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:26.931284   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:26.944193   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:26.944230   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:36:23.252710   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:25.257011   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:27.751677   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:24.482571   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.982654   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:26.732883   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.233481   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:36:27.011803   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:27.011826   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:27.011846   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:27.085331   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:27.085364   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.625633   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:29.639941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:29.640002   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:29.673642   68995 cri.go:89] found id: ""
	I0719 19:36:29.673669   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.673678   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:29.673683   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:29.673744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:29.710482   68995 cri.go:89] found id: ""
	I0719 19:36:29.710507   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.710515   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:29.710521   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:29.710571   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:29.746719   68995 cri.go:89] found id: ""
	I0719 19:36:29.746744   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.746754   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:29.746761   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:29.746823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:29.783230   68995 cri.go:89] found id: ""
	I0719 19:36:29.783256   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.783264   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:29.783269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:29.783318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:29.817944   68995 cri.go:89] found id: ""
	I0719 19:36:29.817971   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.817980   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:29.817986   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:29.818049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:29.851277   68995 cri.go:89] found id: ""
	I0719 19:36:29.851305   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.851315   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:29.851328   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:29.851387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:29.884476   68995 cri.go:89] found id: ""
	I0719 19:36:29.884502   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.884510   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:29.884516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:29.884566   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:29.916564   68995 cri.go:89] found id: ""
	I0719 19:36:29.916592   68995 logs.go:276] 0 containers: []
	W0719 19:36:29.916600   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:29.916608   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:29.916620   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:29.967970   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:29.968009   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:29.981767   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:29.981792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:30.046914   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:30.046939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:30.046953   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:30.125610   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:30.125644   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:29.753033   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.251239   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:29.483807   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.982677   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:31.732069   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.231495   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.232914   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:32.661765   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:32.676274   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:32.676354   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:32.712439   68995 cri.go:89] found id: ""
	I0719 19:36:32.712460   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.712468   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:32.712474   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:32.712534   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:32.745933   68995 cri.go:89] found id: ""
	I0719 19:36:32.745956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.745965   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:32.745972   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:32.746035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:32.778900   68995 cri.go:89] found id: ""
	I0719 19:36:32.778920   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.778927   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:32.778932   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:32.778988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:32.809927   68995 cri.go:89] found id: ""
	I0719 19:36:32.809956   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.809966   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:32.809979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:32.810038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:32.840388   68995 cri.go:89] found id: ""
	I0719 19:36:32.840409   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.840417   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:32.840422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:32.840472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:32.873390   68995 cri.go:89] found id: ""
	I0719 19:36:32.873419   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.873428   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:32.873436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:32.873499   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:32.909278   68995 cri.go:89] found id: ""
	I0719 19:36:32.909304   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.909311   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:32.909317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:32.909374   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:32.941310   68995 cri.go:89] found id: ""
	I0719 19:36:32.941339   68995 logs.go:276] 0 containers: []
	W0719 19:36:32.941346   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:32.941358   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:32.941370   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:32.992848   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:32.992875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:33.006272   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:33.006305   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:33.068323   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:33.068358   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:33.068372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:33.148030   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:33.148065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:35.684957   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:35.697753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:35.697823   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:35.733470   68995 cri.go:89] found id: ""
	I0719 19:36:35.733492   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.733500   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:35.733508   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:35.733565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:35.766009   68995 cri.go:89] found id: ""
	I0719 19:36:35.766029   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.766037   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:35.766047   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:35.766104   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:35.799205   68995 cri.go:89] found id: ""
	I0719 19:36:35.799228   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.799238   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:35.799245   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:35.799303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:35.834102   68995 cri.go:89] found id: ""
	I0719 19:36:35.834129   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.834139   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:35.834147   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:35.834217   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:35.867545   68995 cri.go:89] found id: ""
	I0719 19:36:35.867578   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.867588   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:35.867597   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:35.867653   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:35.902804   68995 cri.go:89] found id: ""
	I0719 19:36:35.902833   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.902856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:35.902862   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:35.902917   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:35.935798   68995 cri.go:89] found id: ""
	I0719 19:36:35.935826   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.935851   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:35.935858   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:35.935918   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:35.968992   68995 cri.go:89] found id: ""
	I0719 19:36:35.969026   68995 logs.go:276] 0 containers: []
	W0719 19:36:35.969037   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:35.969049   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:35.969068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:36.054017   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:36.054050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:36.094850   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:36.094875   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:36.145996   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:36.146030   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:36.159520   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:36.159549   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:36.227943   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:34.751556   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.754350   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:34.483433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:36.982914   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.732952   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.231720   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:38.728958   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:38.742662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:38.742727   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:38.779815   68995 cri.go:89] found id: ""
	I0719 19:36:38.779856   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.779867   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:38.779874   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:38.779938   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:38.814393   68995 cri.go:89] found id: ""
	I0719 19:36:38.814418   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.814428   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:38.814435   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:38.814496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:38.848480   68995 cri.go:89] found id: ""
	I0719 19:36:38.848501   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.848508   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:38.848513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:38.848565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:38.883737   68995 cri.go:89] found id: ""
	I0719 19:36:38.883787   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.883796   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:38.883802   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:38.883887   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:38.921331   68995 cri.go:89] found id: ""
	I0719 19:36:38.921365   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.921376   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:38.921384   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:38.921432   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:38.957590   68995 cri.go:89] found id: ""
	I0719 19:36:38.957617   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.957625   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:38.957630   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:38.957689   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:38.992056   68995 cri.go:89] found id: ""
	I0719 19:36:38.992081   68995 logs.go:276] 0 containers: []
	W0719 19:36:38.992091   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:38.992099   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:38.992162   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:39.026137   68995 cri.go:89] found id: ""
	I0719 19:36:39.026165   68995 logs.go:276] 0 containers: []
	W0719 19:36:39.026173   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:39.026181   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:39.026191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:39.104975   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:39.105016   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:39.149479   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:39.149511   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:39.200003   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:39.200033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:39.213690   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:39.213720   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:39.283772   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:41.784814   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:41.798307   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:41.798388   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:41.835130   68995 cri.go:89] found id: ""
	I0719 19:36:41.835160   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.835181   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:41.835190   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:41.835248   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:41.871655   68995 cri.go:89] found id: ""
	I0719 19:36:41.871686   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.871696   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:41.871702   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:41.871748   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:41.905291   68995 cri.go:89] found id: ""
	I0719 19:36:41.905321   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.905332   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:41.905338   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:41.905387   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:41.938786   68995 cri.go:89] found id: ""
	I0719 19:36:41.938820   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.938831   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:41.938839   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:41.938900   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:41.973146   68995 cri.go:89] found id: ""
	I0719 19:36:41.973170   68995 logs.go:276] 0 containers: []
	W0719 19:36:41.973178   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:41.973183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:41.973230   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:39.251794   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.251895   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:39.483191   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:41.983709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:43.233443   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.731442   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:42.007654   68995 cri.go:89] found id: ""
	I0719 19:36:42.007692   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.007703   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:42.007713   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:42.007771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:42.043419   68995 cri.go:89] found id: ""
	I0719 19:36:42.043454   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.043465   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:42.043472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:42.043550   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:42.079058   68995 cri.go:89] found id: ""
	I0719 19:36:42.079082   68995 logs.go:276] 0 containers: []
	W0719 19:36:42.079093   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:42.079106   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:42.079119   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:42.133699   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:42.133731   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:42.149143   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:42.149178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:42.219380   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:42.219476   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:42.219493   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:42.300052   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:42.300089   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:44.837865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:44.851088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:44.851160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:44.887593   68995 cri.go:89] found id: ""
	I0719 19:36:44.887625   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.887636   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:44.887644   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:44.887704   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:44.923487   68995 cri.go:89] found id: ""
	I0719 19:36:44.923518   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.923528   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:44.923535   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:44.923588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:44.958530   68995 cri.go:89] found id: ""
	I0719 19:36:44.958552   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.958560   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:44.958565   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:44.958612   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:44.993148   68995 cri.go:89] found id: ""
	I0719 19:36:44.993174   68995 logs.go:276] 0 containers: []
	W0719 19:36:44.993184   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:44.993191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:44.993261   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:45.024580   68995 cri.go:89] found id: ""
	I0719 19:36:45.024608   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.024619   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:45.024627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:45.024687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:45.061929   68995 cri.go:89] found id: ""
	I0719 19:36:45.061954   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.061965   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:45.061974   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:45.062029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:45.093445   68995 cri.go:89] found id: ""
	I0719 19:36:45.093473   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.093482   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:45.093490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:45.093549   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:45.129804   68995 cri.go:89] found id: ""
	I0719 19:36:45.129838   68995 logs.go:276] 0 containers: []
	W0719 19:36:45.129848   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:45.129860   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:45.129874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:45.184333   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:45.184373   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:45.197516   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:45.197545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:45.267374   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:45.267394   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:45.267411   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:45.343359   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:45.343400   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:43.751901   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:45.752529   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:44.482460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:46.483629   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.231760   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.232025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:47.880247   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:47.892644   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:47.892716   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:47.924838   68995 cri.go:89] found id: ""
	I0719 19:36:47.924866   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.924874   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:47.924880   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:47.924936   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:47.956920   68995 cri.go:89] found id: ""
	I0719 19:36:47.956946   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.956954   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:47.956960   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:47.957021   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:47.992589   68995 cri.go:89] found id: ""
	I0719 19:36:47.992614   68995 logs.go:276] 0 containers: []
	W0719 19:36:47.992622   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:47.992627   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:47.992681   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:48.028222   68995 cri.go:89] found id: ""
	I0719 19:36:48.028251   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.028262   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:48.028270   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:48.028343   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:48.062122   68995 cri.go:89] found id: ""
	I0719 19:36:48.062144   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.062152   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:48.062157   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:48.062202   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:48.095189   68995 cri.go:89] found id: ""
	I0719 19:36:48.095212   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.095220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:48.095226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:48.095273   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:48.127726   68995 cri.go:89] found id: ""
	I0719 19:36:48.127756   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.127764   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:48.127769   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:48.127820   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:48.160445   68995 cri.go:89] found id: ""
	I0719 19:36:48.160473   68995 logs.go:276] 0 containers: []
	W0719 19:36:48.160481   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:48.160490   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:48.160501   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:48.214196   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:48.214228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:48.227942   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:48.227967   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:48.302651   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:48.302671   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:48.302684   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.382283   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:48.382325   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:50.921030   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:50.934783   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:50.934859   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:50.970236   68995 cri.go:89] found id: ""
	I0719 19:36:50.970268   68995 logs.go:276] 0 containers: []
	W0719 19:36:50.970282   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:50.970290   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:50.970349   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:51.003636   68995 cri.go:89] found id: ""
	I0719 19:36:51.003662   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.003669   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:51.003675   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:51.003735   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:51.035390   68995 cri.go:89] found id: ""
	I0719 19:36:51.035417   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.035425   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:51.035432   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:51.035496   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:51.069118   68995 cri.go:89] found id: ""
	I0719 19:36:51.069140   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.069148   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:51.069154   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:51.069207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:51.102433   68995 cri.go:89] found id: ""
	I0719 19:36:51.102465   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.102476   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:51.102483   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:51.102547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:51.135711   68995 cri.go:89] found id: ""
	I0719 19:36:51.135743   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.135755   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:51.135764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:51.135856   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:51.168041   68995 cri.go:89] found id: ""
	I0719 19:36:51.168073   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.168083   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:51.168091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:51.168153   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:51.205637   68995 cri.go:89] found id: ""
	I0719 19:36:51.205667   68995 logs.go:276] 0 containers: []
	W0719 19:36:51.205674   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:51.205683   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:51.205696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:51.245168   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:51.245197   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:51.295810   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:51.295863   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:51.308791   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:51.308813   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:51.385214   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:51.385239   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:51.385254   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:48.251909   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.750650   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.751426   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:48.983311   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:50.984588   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:52.232203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:54.731934   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.959019   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:53.971044   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:53.971118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:54.004971   68995 cri.go:89] found id: ""
	I0719 19:36:54.004996   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.005007   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:54.005014   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:54.005074   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:54.035794   68995 cri.go:89] found id: ""
	I0719 19:36:54.035820   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.035828   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:54.035849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:54.035909   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:54.067783   68995 cri.go:89] found id: ""
	I0719 19:36:54.067811   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.067821   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:54.067828   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:54.067905   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:54.107403   68995 cri.go:89] found id: ""
	I0719 19:36:54.107424   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.107432   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:54.107437   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:54.107486   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:54.144411   68995 cri.go:89] found id: ""
	I0719 19:36:54.144434   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.144443   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:54.144448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:54.144507   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:54.180416   68995 cri.go:89] found id: ""
	I0719 19:36:54.180444   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.180453   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:54.180459   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:54.180504   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:54.216444   68995 cri.go:89] found id: ""
	I0719 19:36:54.216476   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.216484   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:54.216489   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:54.216536   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:54.252652   68995 cri.go:89] found id: ""
	I0719 19:36:54.252672   68995 logs.go:276] 0 containers: []
	W0719 19:36:54.252679   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:54.252686   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:54.252697   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:54.265888   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:54.265920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:54.337271   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:54.337300   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:54.337316   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:54.414882   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:54.414916   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:36:54.452363   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:54.452387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:55.252673   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.752884   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:53.483020   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:55.982973   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.983073   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:56.732297   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:58.733339   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:01.233006   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:57.004491   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:36:57.017581   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:36:57.017662   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:36:57.051318   68995 cri.go:89] found id: ""
	I0719 19:36:57.051347   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.051355   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:36:57.051362   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:36:57.051414   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:36:57.091361   68995 cri.go:89] found id: ""
	I0719 19:36:57.091386   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.091394   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:36:57.091399   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:36:57.091454   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:36:57.125970   68995 cri.go:89] found id: ""
	I0719 19:36:57.125998   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.126009   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:36:57.126016   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:36:57.126078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:36:57.162767   68995 cri.go:89] found id: ""
	I0719 19:36:57.162794   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.162804   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:36:57.162811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:36:57.162868   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:36:57.199712   68995 cri.go:89] found id: ""
	I0719 19:36:57.199742   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.199763   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:36:57.199770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:36:57.199882   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:36:57.234782   68995 cri.go:89] found id: ""
	I0719 19:36:57.234812   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.234823   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:36:57.234830   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:36:57.234889   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:36:57.269789   68995 cri.go:89] found id: ""
	I0719 19:36:57.269817   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.269827   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:36:57.269835   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:36:57.269895   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:36:57.303323   68995 cri.go:89] found id: ""
	I0719 19:36:57.303350   68995 logs.go:276] 0 containers: []
	W0719 19:36:57.303361   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:36:57.303371   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:36:57.303385   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:36:57.353434   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:36:57.353469   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:36:57.365982   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:36:57.366007   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:36:57.433767   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:36:57.433787   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:36:57.433801   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:36:57.514332   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:36:57.514372   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.053832   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:00.067665   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:00.067733   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:00.102502   68995 cri.go:89] found id: ""
	I0719 19:37:00.102529   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.102539   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:00.102547   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:00.102605   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:00.155051   68995 cri.go:89] found id: ""
	I0719 19:37:00.155088   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.155101   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:00.155109   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:00.155245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:00.189002   68995 cri.go:89] found id: ""
	I0719 19:37:00.189032   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.189042   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:00.189049   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:00.189113   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:00.222057   68995 cri.go:89] found id: ""
	I0719 19:37:00.222083   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.222091   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:00.222097   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:00.222278   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:00.256815   68995 cri.go:89] found id: ""
	I0719 19:37:00.256839   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.256847   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:00.256853   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:00.256907   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:00.288710   68995 cri.go:89] found id: ""
	I0719 19:37:00.288742   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.288750   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:00.288756   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:00.288803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:00.325994   68995 cri.go:89] found id: ""
	I0719 19:37:00.326019   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.326028   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:00.326034   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:00.326090   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:00.360401   68995 cri.go:89] found id: ""
	I0719 19:37:00.360434   68995 logs.go:276] 0 containers: []
	W0719 19:37:00.360441   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:00.360450   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:00.360462   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:00.373448   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:00.373479   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:00.443796   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:00.443812   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:00.443825   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:00.529738   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:00.529777   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:00.571340   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:00.571367   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:00.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.750698   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:36:59.984278   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:02.482260   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.731540   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:05.731921   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:03.121961   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:03.134075   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:03.134472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:03.168000   68995 cri.go:89] found id: ""
	I0719 19:37:03.168033   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.168044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:03.168053   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:03.168118   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:03.200331   68995 cri.go:89] found id: ""
	I0719 19:37:03.200362   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.200369   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:03.200376   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:03.200437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:03.232472   68995 cri.go:89] found id: ""
	I0719 19:37:03.232500   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.232510   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:03.232517   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:03.232567   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:03.265173   68995 cri.go:89] found id: ""
	I0719 19:37:03.265200   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.265208   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:03.265216   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:03.265268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:03.298983   68995 cri.go:89] found id: ""
	I0719 19:37:03.299011   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.299022   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:03.299029   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:03.299081   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:03.337990   68995 cri.go:89] found id: ""
	I0719 19:37:03.338024   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.338038   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:03.338046   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:03.338111   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:03.371374   68995 cri.go:89] found id: ""
	I0719 19:37:03.371402   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.371410   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:03.371415   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:03.371472   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:03.403890   68995 cri.go:89] found id: ""
	I0719 19:37:03.403915   68995 logs.go:276] 0 containers: []
	W0719 19:37:03.403928   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:03.403939   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:03.403954   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:03.478826   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:03.478864   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:03.519208   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:03.519231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:03.571603   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:03.571636   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:03.585154   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:03.585178   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:03.649156   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.149550   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:06.162598   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:06.162676   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:06.196635   68995 cri.go:89] found id: ""
	I0719 19:37:06.196681   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.196692   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:06.196701   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:06.196764   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:06.230855   68995 cri.go:89] found id: ""
	I0719 19:37:06.230882   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.230892   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:06.230899   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:06.230959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:06.264715   68995 cri.go:89] found id: ""
	I0719 19:37:06.264754   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.264771   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:06.264779   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:06.264842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:06.297725   68995 cri.go:89] found id: ""
	I0719 19:37:06.297751   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.297758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:06.297765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:06.297810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:06.334943   68995 cri.go:89] found id: ""
	I0719 19:37:06.334970   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.334978   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:06.334984   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:06.335038   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:06.373681   68995 cri.go:89] found id: ""
	I0719 19:37:06.373716   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.373725   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:06.373730   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:06.373781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:06.409528   68995 cri.go:89] found id: ""
	I0719 19:37:06.409554   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.409564   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:06.409609   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:06.409682   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:06.444636   68995 cri.go:89] found id: ""
	I0719 19:37:06.444671   68995 logs.go:276] 0 containers: []
	W0719 19:37:06.444680   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:06.444715   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:06.444734   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:06.499678   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:06.499714   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:06.514790   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:06.514821   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:06.604576   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:06.604597   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:06.604615   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:06.690956   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:06.690989   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:04.752636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:07.251643   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:04.484460   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:06.982794   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.231684   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.233049   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:09.228994   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:09.243444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:09.243506   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:09.277208   68995 cri.go:89] found id: ""
	I0719 19:37:09.277229   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.277237   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:09.277247   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:09.277296   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:09.309077   68995 cri.go:89] found id: ""
	I0719 19:37:09.309102   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.309110   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:09.309115   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:09.309168   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:09.341212   68995 cri.go:89] found id: ""
	I0719 19:37:09.341247   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.341258   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:09.341265   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:09.341332   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:09.376373   68995 cri.go:89] found id: ""
	I0719 19:37:09.376400   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.376410   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:09.376416   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:09.376475   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:09.409175   68995 cri.go:89] found id: ""
	I0719 19:37:09.409204   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.409215   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:09.409222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:09.409277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:09.441481   68995 cri.go:89] found id: ""
	I0719 19:37:09.441509   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.441518   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:09.441526   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:09.441588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:09.472907   68995 cri.go:89] found id: ""
	I0719 19:37:09.472935   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.472946   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:09.472954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:09.473008   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:09.507776   68995 cri.go:89] found id: ""
	I0719 19:37:09.507801   68995 logs.go:276] 0 containers: []
	W0719 19:37:09.507810   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:09.507818   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:09.507850   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:09.559873   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:09.559910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:09.572650   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:09.572677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:09.644352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:09.644375   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:09.644387   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:09.721464   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:09.721504   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:09.252776   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:11.751354   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:08.983314   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:10.983634   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.732203   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.234561   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:12.264113   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:12.276702   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:12.276758   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:12.321359   68995 cri.go:89] found id: ""
	I0719 19:37:12.321386   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.321393   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:12.321400   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:12.321471   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:12.365402   68995 cri.go:89] found id: ""
	I0719 19:37:12.365433   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.365441   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:12.365447   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:12.365505   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:12.409431   68995 cri.go:89] found id: ""
	I0719 19:37:12.409459   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.409471   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:12.409478   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:12.409526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:12.447628   68995 cri.go:89] found id: ""
	I0719 19:37:12.447656   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.447665   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:12.447671   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:12.447720   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:12.481760   68995 cri.go:89] found id: ""
	I0719 19:37:12.481796   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.481806   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:12.481813   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:12.481875   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:12.513430   68995 cri.go:89] found id: ""
	I0719 19:37:12.513460   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.513471   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:12.513477   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:12.513525   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:12.546140   68995 cri.go:89] found id: ""
	I0719 19:37:12.546171   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.546182   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:12.546190   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:12.546258   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:12.578741   68995 cri.go:89] found id: ""
	I0719 19:37:12.578764   68995 logs.go:276] 0 containers: []
	W0719 19:37:12.578774   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:12.578785   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:12.578800   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:12.647584   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:12.647610   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:12.647625   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:12.721989   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:12.722027   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:12.761141   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:12.761168   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:12.811283   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:12.811334   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:15.327447   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:15.339987   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:15.340053   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:15.373692   68995 cri.go:89] found id: ""
	I0719 19:37:15.373720   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.373729   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:15.373737   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:15.373794   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:15.405801   68995 cri.go:89] found id: ""
	I0719 19:37:15.405831   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.405842   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:15.405849   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:15.405915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:15.439395   68995 cri.go:89] found id: ""
	I0719 19:37:15.439424   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.439436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:15.439444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:15.439513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:15.475944   68995 cri.go:89] found id: ""
	I0719 19:37:15.475967   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.475975   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:15.475981   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:15.476029   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:15.509593   68995 cri.go:89] found id: ""
	I0719 19:37:15.509613   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.509627   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:15.509634   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:15.509695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:15.548840   68995 cri.go:89] found id: ""
	I0719 19:37:15.548871   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.548881   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:15.548887   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:15.548948   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:15.581096   68995 cri.go:89] found id: ""
	I0719 19:37:15.581139   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.581155   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:15.581163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:15.581223   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:15.617428   68995 cri.go:89] found id: ""
	I0719 19:37:15.617458   68995 logs.go:276] 0 containers: []
	W0719 19:37:15.617469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:15.617479   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:15.617495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:15.688196   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:15.688223   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:15.688239   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:15.770179   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:15.770216   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:15.807899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:15.807924   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:15.858152   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:15.858186   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:13.751672   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:16.251465   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:13.482766   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:15.485709   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.982675   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:17.731547   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.732093   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:18.370857   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:18.383710   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:18.383781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:18.417000   68995 cri.go:89] found id: ""
	I0719 19:37:18.417032   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.417044   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:18.417051   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:18.417149   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:18.455852   68995 cri.go:89] found id: ""
	I0719 19:37:18.455884   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.455900   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:18.455907   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:18.455970   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:18.491458   68995 cri.go:89] found id: ""
	I0719 19:37:18.491490   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.491501   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:18.491508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:18.491573   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:18.525125   68995 cri.go:89] found id: ""
	I0719 19:37:18.525151   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.525158   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:18.525163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:18.525210   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:18.560527   68995 cri.go:89] found id: ""
	I0719 19:37:18.560555   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.560566   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:18.560574   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:18.560624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:18.595328   68995 cri.go:89] found id: ""
	I0719 19:37:18.595365   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.595378   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:18.595388   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:18.595445   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:18.627669   68995 cri.go:89] found id: ""
	I0719 19:37:18.627697   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.627705   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:18.627711   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:18.627765   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:18.660140   68995 cri.go:89] found id: ""
	I0719 19:37:18.660168   68995 logs.go:276] 0 containers: []
	W0719 19:37:18.660177   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:18.660191   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:18.660206   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:18.733599   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:18.733628   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:18.774893   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:18.774920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.827635   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:18.827672   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:18.841519   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:18.841545   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:18.904744   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.405683   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:21.420145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:21.420224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:21.456434   68995 cri.go:89] found id: ""
	I0719 19:37:21.456465   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.456477   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:21.456485   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:21.456543   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:21.489839   68995 cri.go:89] found id: ""
	I0719 19:37:21.489867   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.489874   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:21.489893   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:21.489945   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:21.521492   68995 cri.go:89] found id: ""
	I0719 19:37:21.521523   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.521533   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:21.521540   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:21.521602   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:21.553974   68995 cri.go:89] found id: ""
	I0719 19:37:21.553998   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.554008   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:21.554015   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:21.554075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:21.589000   68995 cri.go:89] found id: ""
	I0719 19:37:21.589030   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.589042   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:21.589050   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:21.589119   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:21.622617   68995 cri.go:89] found id: ""
	I0719 19:37:21.622640   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.622648   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:21.622653   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:21.622702   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:21.654149   68995 cri.go:89] found id: ""
	I0719 19:37:21.654171   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.654178   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:21.654183   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:21.654241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:21.687587   68995 cri.go:89] found id: ""
	I0719 19:37:21.687616   68995 logs.go:276] 0 containers: []
	W0719 19:37:21.687628   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:21.687640   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:21.687656   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:21.699883   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:21.699910   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:21.769804   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:21.769825   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:21.769839   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:21.857167   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:21.857205   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:21.896025   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:21.896050   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:18.251893   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:20.751093   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:19.989216   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.483346   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:22.231129   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.231620   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.232522   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.444695   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:24.457239   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:24.457310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:24.491968   68995 cri.go:89] found id: ""
	I0719 19:37:24.491990   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.491999   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:24.492003   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:24.492049   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:24.526039   68995 cri.go:89] found id: ""
	I0719 19:37:24.526061   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.526069   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:24.526074   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:24.526120   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:24.560153   68995 cri.go:89] found id: ""
	I0719 19:37:24.560184   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.560194   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:24.560201   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:24.560254   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:24.594331   68995 cri.go:89] found id: ""
	I0719 19:37:24.594359   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.594369   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:24.594375   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:24.594442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:24.632389   68995 cri.go:89] found id: ""
	I0719 19:37:24.632416   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.632425   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:24.632430   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:24.632481   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:24.664448   68995 cri.go:89] found id: ""
	I0719 19:37:24.664473   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.664481   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:24.664487   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:24.664547   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:24.699510   68995 cri.go:89] found id: ""
	I0719 19:37:24.699533   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.699542   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:24.699547   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:24.699599   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:24.732624   68995 cri.go:89] found id: ""
	I0719 19:37:24.732646   68995 logs.go:276] 0 containers: []
	W0719 19:37:24.732655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:24.732665   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:24.732688   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:24.800136   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:24.800161   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:24.800176   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:24.883652   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:24.883696   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:24.922027   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:24.922052   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:24.975191   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:24.975235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
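The cycle above shows minikube gathering diagnostics while the control plane is down: every expected control-plane container is looked up by name through the CRI, and each query comes back empty. A minimal shell sketch of that discovery step, reusing the crictl invocation that appears verbatim in the log (the component list is read off the log lines above, not taken from minikube source):

    # Illustrative reproduction of the per-component CRI query seen above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then
        echo "$name -> $ids"
      else
        echo "no container found matching \"$name\""
      fi
    done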
	I0719 19:37:23.251310   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:25.251598   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.751985   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:24.484050   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:26.982359   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.237361   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.734572   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:27.489307   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:27.503441   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:27.503500   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:27.536099   68995 cri.go:89] found id: ""
	I0719 19:37:27.536126   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.536134   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:27.536140   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:27.536187   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:27.567032   68995 cri.go:89] found id: ""
	I0719 19:37:27.567060   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.567068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:27.567073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:27.567133   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:27.599396   68995 cri.go:89] found id: ""
	I0719 19:37:27.599425   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.599436   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:27.599444   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:27.599510   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:27.634718   68995 cri.go:89] found id: ""
	I0719 19:37:27.634746   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.634755   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:27.634760   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:27.634809   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:27.672731   68995 cri.go:89] found id: ""
	I0719 19:37:27.672766   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.672778   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:27.672787   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:27.672852   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:27.706705   68995 cri.go:89] found id: ""
	I0719 19:37:27.706734   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.706743   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:27.706751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:27.706814   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:27.740520   68995 cri.go:89] found id: ""
	I0719 19:37:27.740548   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.740556   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:27.740561   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:27.740614   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:27.772629   68995 cri.go:89] found id: ""
	I0719 19:37:27.772653   68995 logs.go:276] 0 containers: []
	W0719 19:37:27.772660   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:27.772668   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:27.772680   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:27.826404   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:27.826432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:27.839299   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:27.839331   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:27.903888   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:27.903908   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:27.903920   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:27.981384   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:27.981418   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:30.519503   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:30.532781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:30.532867   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:30.566867   68995 cri.go:89] found id: ""
	I0719 19:37:30.566900   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.566911   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:30.566918   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:30.566979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:30.599130   68995 cri.go:89] found id: ""
	I0719 19:37:30.599160   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.599171   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:30.599178   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:30.599241   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:30.632636   68995 cri.go:89] found id: ""
	I0719 19:37:30.632672   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.632685   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:30.632694   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:30.632767   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:30.665793   68995 cri.go:89] found id: ""
	I0719 19:37:30.665822   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.665837   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:30.665845   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:30.665908   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:30.699002   68995 cri.go:89] found id: ""
	I0719 19:37:30.699027   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.699034   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:30.699039   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:30.699098   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:30.733654   68995 cri.go:89] found id: ""
	I0719 19:37:30.733680   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.733690   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:30.733698   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:30.733757   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:30.766219   68995 cri.go:89] found id: ""
	I0719 19:37:30.766250   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.766260   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:30.766269   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:30.766344   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:30.800884   68995 cri.go:89] found id: ""
	I0719 19:37:30.800912   68995 logs.go:276] 0 containers: []
	W0719 19:37:30.800922   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:30.800931   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:30.800946   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:30.851079   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:30.851114   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:30.863829   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:30.863873   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:30.932194   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:30.932219   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:30.932231   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:31.012345   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:31.012377   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
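Each "describe nodes" attempt above fails the same way: kubectl cannot reach localhost:8443, which is consistent with the empty kube-apiserver query earlier in the same cycle. A quick hedged check one could run on the node (assuming the ss utility is available there):

    # Is anything serving the apiserver port that kubectl is being refused on?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443 - apiserver is not up"
    # The same condition shows up as an empty result from:
    sudo crictl ps -a --quiet --name=kube-apiserver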
	I0719 19:37:30.250950   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.251737   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:28.982804   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:30.983656   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:32.736607   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.232724   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.549156   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:33.561253   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:33.561318   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:33.593443   68995 cri.go:89] found id: ""
	I0719 19:37:33.593469   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.593479   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:33.593487   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:33.593545   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:33.626350   68995 cri.go:89] found id: ""
	I0719 19:37:33.626371   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.626380   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:33.626385   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:33.626434   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:33.660751   68995 cri.go:89] found id: ""
	I0719 19:37:33.660789   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.660797   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:33.660803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:33.660857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:33.696342   68995 cri.go:89] found id: ""
	I0719 19:37:33.696380   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.696392   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:33.696399   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:33.696465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:33.728815   68995 cri.go:89] found id: ""
	I0719 19:37:33.728841   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.728850   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:33.728860   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:33.728922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:33.765055   68995 cri.go:89] found id: ""
	I0719 19:37:33.765088   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.765095   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:33.765101   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:33.765150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:33.798600   68995 cri.go:89] found id: ""
	I0719 19:37:33.798627   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.798635   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:33.798640   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:33.798687   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:33.831037   68995 cri.go:89] found id: ""
	I0719 19:37:33.831065   68995 logs.go:276] 0 containers: []
	W0719 19:37:33.831075   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:33.831085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:33.831100   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:33.880454   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:33.880485   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:33.894966   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:33.894996   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:33.964170   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:33.964200   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:33.964215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:34.046534   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:34.046565   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:36.587779   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:36.601141   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:36.601211   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:36.634499   68995 cri.go:89] found id: ""
	I0719 19:37:36.634527   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.634536   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:36.634542   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:36.634606   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:36.666890   68995 cri.go:89] found id: ""
	I0719 19:37:36.666917   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.666925   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:36.666931   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:36.666986   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:36.698970   68995 cri.go:89] found id: ""
	I0719 19:37:36.699001   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.699016   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:36.699023   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:36.699078   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:36.731334   68995 cri.go:89] found id: ""
	I0719 19:37:36.731368   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.731379   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:36.731386   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:36.731446   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:36.764358   68995 cri.go:89] found id: ""
	I0719 19:37:36.764382   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.764394   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:36.764400   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:36.764455   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:36.798707   68995 cri.go:89] found id: ""
	I0719 19:37:36.798734   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.798742   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:36.798747   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:36.798797   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:36.831226   68995 cri.go:89] found id: ""
	I0719 19:37:36.831255   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.831266   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:36.831273   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:36.831331   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:36.863480   68995 cri.go:89] found id: ""
	I0719 19:37:36.863506   68995 logs.go:276] 0 containers: []
	W0719 19:37:36.863513   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:36.863522   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:36.863536   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:36.915869   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:36.915908   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:36.930750   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:36.930781   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:37:34.750463   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:36.751189   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:33.484028   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:35.984592   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:37.233215   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:39.732096   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:37:37.012674   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:37.012694   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:37.012708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:37.091337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:37.091375   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:39.630016   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:39.642979   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:39.643041   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:39.675984   68995 cri.go:89] found id: ""
	I0719 19:37:39.676013   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.676023   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:39.676031   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:39.676100   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:39.707035   68995 cri.go:89] found id: ""
	I0719 19:37:39.707061   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.707068   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:39.707073   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:39.707131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:39.744471   68995 cri.go:89] found id: ""
	I0719 19:37:39.744496   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.744503   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:39.744508   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:39.744561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:39.778135   68995 cri.go:89] found id: ""
	I0719 19:37:39.778162   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.778171   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:39.778179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:39.778238   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:39.811690   68995 cri.go:89] found id: ""
	I0719 19:37:39.811717   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.811727   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:39.811735   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:39.811793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:39.843697   68995 cri.go:89] found id: ""
	I0719 19:37:39.843728   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.843740   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:39.843751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:39.843801   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:39.877406   68995 cri.go:89] found id: ""
	I0719 19:37:39.877436   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.877448   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:39.877455   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:39.877520   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:39.909759   68995 cri.go:89] found id: ""
	I0719 19:37:39.909783   68995 logs.go:276] 0 containers: []
	W0719 19:37:39.909790   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:39.909799   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:39.909809   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:39.958364   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:39.958393   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:39.973472   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:39.973512   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:40.040299   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:40.040323   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:40.040338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:40.117663   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:40.117702   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
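Every cycle in this stretch begins with the same pgrep probe for a kube-apiserver process and only re-collects logs while none is found. A rough sketch of that retry pattern; the sleep interval is an assumption based on the roughly three-second gaps between probes in the timestamps above:

    # Poll for a kube-apiserver process; while none exists, keep gathering diagnostics.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "apiserver process not found, collecting logs again"
      sleep 3
    done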
	I0719 19:37:38.751436   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.751476   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.752002   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:38.483729   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:40.983938   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.232962   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:44.732151   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:42.657713   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:42.672030   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:42.672096   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:42.706163   68995 cri.go:89] found id: ""
	I0719 19:37:42.706193   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.706204   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:42.706212   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:42.706271   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:42.740026   68995 cri.go:89] found id: ""
	I0719 19:37:42.740053   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.740063   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:42.740070   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:42.740132   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:42.772413   68995 cri.go:89] found id: ""
	I0719 19:37:42.772436   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.772443   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:42.772448   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:42.772493   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:42.803898   68995 cri.go:89] found id: ""
	I0719 19:37:42.803925   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.803936   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:42.803943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:42.804007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:42.835062   68995 cri.go:89] found id: ""
	I0719 19:37:42.835091   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.835100   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:42.835108   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:42.835181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:42.867382   68995 cri.go:89] found id: ""
	I0719 19:37:42.867411   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.867420   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:42.867427   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:42.867513   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:42.899461   68995 cri.go:89] found id: ""
	I0719 19:37:42.899483   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.899491   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:42.899496   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:42.899542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:42.932789   68995 cri.go:89] found id: ""
	I0719 19:37:42.932818   68995 logs.go:276] 0 containers: []
	W0719 19:37:42.932827   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:42.932837   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:42.932854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:42.945144   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:42.945174   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:43.013369   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:43.013393   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:43.013408   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:43.093227   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:43.093263   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:43.134429   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:43.134454   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:45.684772   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:45.697903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:45.697958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:45.731984   68995 cri.go:89] found id: ""
	I0719 19:37:45.732007   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.732015   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:45.732021   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:45.732080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:45.767631   68995 cri.go:89] found id: ""
	I0719 19:37:45.767660   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.767670   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:45.767678   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:45.767739   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:45.800822   68995 cri.go:89] found id: ""
	I0719 19:37:45.800849   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.800860   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:45.800871   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:45.800928   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:45.834313   68995 cri.go:89] found id: ""
	I0719 19:37:45.834345   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.834354   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:45.834360   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:45.834417   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:45.868659   68995 cri.go:89] found id: ""
	I0719 19:37:45.868690   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.868701   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:45.868709   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:45.868780   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:45.900779   68995 cri.go:89] found id: ""
	I0719 19:37:45.900806   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.900816   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:45.900824   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:45.900886   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:45.932767   68995 cri.go:89] found id: ""
	I0719 19:37:45.932796   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.932806   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:45.932811   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:45.932872   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:45.967785   68995 cri.go:89] found id: ""
	I0719 19:37:45.967814   68995 logs.go:276] 0 containers: []
	W0719 19:37:45.967822   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:45.967839   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:45.967852   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:46.034598   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:46.034623   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:46.034642   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:46.124237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:46.124286   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:46.164874   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:46.164904   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:46.216679   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:46.216713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:45.251369   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.251447   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:43.483160   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:45.983286   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:47.983452   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:46.732579   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:49.232449   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:51.232677   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
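Interleaved with the diagnostics loop, three other profiles (processes 68019, 68536, 68617) keep polling their metrics-server pods, and pod_ready reports Ready=False on every pass. The equivalent manual check, using one pod name copied from the log; the jsonpath expression and timeout are illustrative assumptions, not minikube code:

    kubectl -n kube-system get pod metrics-server-569cc877fc-sfc85 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it becomes Ready (fails after the timeout if it never does):
    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-569cc877fc-sfc85 --timeout=60s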
	I0719 19:37:48.730444   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:48.743209   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:48.743283   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:48.780940   68995 cri.go:89] found id: ""
	I0719 19:37:48.780974   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.780982   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:48.780987   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:48.781037   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:48.814186   68995 cri.go:89] found id: ""
	I0719 19:37:48.814213   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.814221   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:48.814230   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:48.814310   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:48.845013   68995 cri.go:89] found id: ""
	I0719 19:37:48.845042   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.845056   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:48.845065   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:48.845139   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:48.878777   68995 cri.go:89] found id: ""
	I0719 19:37:48.878799   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.878807   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:48.878812   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:48.878858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:48.908926   68995 cri.go:89] found id: ""
	I0719 19:37:48.908954   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.908965   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:48.908972   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:48.909032   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:48.942442   68995 cri.go:89] found id: ""
	I0719 19:37:48.942473   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.942483   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:48.942491   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:48.942554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:48.974213   68995 cri.go:89] found id: ""
	I0719 19:37:48.974240   68995 logs.go:276] 0 containers: []
	W0719 19:37:48.974249   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:48.974254   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:48.974308   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:49.010041   68995 cri.go:89] found id: ""
	I0719 19:37:49.010067   68995 logs.go:276] 0 containers: []
	W0719 19:37:49.010076   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:49.010085   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:49.010098   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:49.060387   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:49.060419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:49.073748   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:49.073773   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:49.139397   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:49.139422   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:49.139439   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:49.216748   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:49.216792   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:51.759657   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:51.771457   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:51.771538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:51.804841   68995 cri.go:89] found id: ""
	I0719 19:37:51.804870   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.804881   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:51.804886   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:51.804937   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:51.838235   68995 cri.go:89] found id: ""
	I0719 19:37:51.838264   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.838274   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:51.838282   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:51.838363   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:51.885664   68995 cri.go:89] found id: ""
	I0719 19:37:51.885689   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.885699   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:51.885706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:51.885769   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:51.917883   68995 cri.go:89] found id: ""
	I0719 19:37:51.917910   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.917920   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:51.917927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:51.917990   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:51.949856   68995 cri.go:89] found id: ""
	I0719 19:37:51.949889   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.949898   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:51.949905   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:51.949965   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:51.984016   68995 cri.go:89] found id: ""
	I0719 19:37:51.984060   68995 logs.go:276] 0 containers: []
	W0719 19:37:51.984072   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:51.984081   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:51.984134   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:49.752557   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.251462   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:50.483240   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.484284   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:53.731744   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:55.733040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:52.020853   68995 cri.go:89] found id: ""
	I0719 19:37:52.020883   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.020895   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:52.020903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:52.020958   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:52.054285   68995 cri.go:89] found id: ""
	I0719 19:37:52.054310   68995 logs.go:276] 0 containers: []
	W0719 19:37:52.054321   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:52.054329   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:52.054341   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:52.091083   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:52.091108   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:52.143946   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:52.143981   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:52.158573   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:52.158600   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:52.232029   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:52.232052   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:52.232065   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:54.811805   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:54.825343   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:54.825404   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:54.859915   68995 cri.go:89] found id: ""
	I0719 19:37:54.859943   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.859953   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:54.859961   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:54.860022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:54.892199   68995 cri.go:89] found id: ""
	I0719 19:37:54.892234   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.892245   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:54.892252   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:54.892316   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:54.925714   68995 cri.go:89] found id: ""
	I0719 19:37:54.925746   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.925757   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:54.925765   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:54.925833   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:54.958677   68995 cri.go:89] found id: ""
	I0719 19:37:54.958717   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.958729   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:54.958736   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:54.958803   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:54.994945   68995 cri.go:89] found id: ""
	I0719 19:37:54.994976   68995 logs.go:276] 0 containers: []
	W0719 19:37:54.994986   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:54.994994   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:54.995073   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:55.028149   68995 cri.go:89] found id: ""
	I0719 19:37:55.028175   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.028185   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:55.028194   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:55.028255   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:55.062698   68995 cri.go:89] found id: ""
	I0719 19:37:55.062731   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.062744   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:55.062753   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:55.062812   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:55.095864   68995 cri.go:89] found id: ""
	I0719 19:37:55.095892   68995 logs.go:276] 0 containers: []
	W0719 19:37:55.095903   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:55.095913   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:55.095928   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:37:55.152201   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:55.152235   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:55.165503   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:55.165528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:55.232405   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:55.232429   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:55.232444   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:55.315444   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:55.315496   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:54.752510   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.251379   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:54.984369   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.482226   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:58.231419   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:00.733424   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:57.852984   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:37:57.865718   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:37:57.865784   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:37:57.898531   68995 cri.go:89] found id: ""
	I0719 19:37:57.898557   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.898565   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:37:57.898570   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:37:57.898625   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:37:57.933746   68995 cri.go:89] found id: ""
	I0719 19:37:57.933774   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.933784   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:37:57.933791   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:37:57.933851   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:37:57.974429   68995 cri.go:89] found id: ""
	I0719 19:37:57.974466   68995 logs.go:276] 0 containers: []
	W0719 19:37:57.974479   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:37:57.974488   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:37:57.974560   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:37:58.011209   68995 cri.go:89] found id: ""
	I0719 19:37:58.011233   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.011243   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:37:58.011249   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:37:58.011313   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:37:58.049309   68995 cri.go:89] found id: ""
	I0719 19:37:58.049337   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.049349   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:37:58.049357   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:37:58.049419   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:37:58.080873   68995 cri.go:89] found id: ""
	I0719 19:37:58.080902   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.080913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:37:58.080920   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:37:58.080979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:37:58.113661   68995 cri.go:89] found id: ""
	I0719 19:37:58.113689   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.113698   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:37:58.113706   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:37:58.113845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:37:58.147431   68995 cri.go:89] found id: ""
	I0719 19:37:58.147459   68995 logs.go:276] 0 containers: []
	W0719 19:37:58.147469   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:37:58.147480   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:37:58.147495   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:37:58.160170   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:37:58.160195   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:37:58.222818   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:37:58.222858   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:37:58.222874   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:58.304895   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:37:58.304929   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:37:58.354462   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:37:58.354492   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:00.905865   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:00.918105   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:00.918180   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:00.952085   68995 cri.go:89] found id: ""
	I0719 19:38:00.952111   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.952122   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:00.952130   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:00.952192   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:00.985966   68995 cri.go:89] found id: ""
	I0719 19:38:00.985990   68995 logs.go:276] 0 containers: []
	W0719 19:38:00.985999   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:00.986011   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:00.986059   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:01.021869   68995 cri.go:89] found id: ""
	I0719 19:38:01.021901   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.021910   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:01.021915   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:01.021961   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:01.055821   68995 cri.go:89] found id: ""
	I0719 19:38:01.055867   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.055878   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:01.055886   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:01.055950   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:01.090922   68995 cri.go:89] found id: ""
	I0719 19:38:01.090945   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.090952   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:01.090958   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:01.091007   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:01.122779   68995 cri.go:89] found id: ""
	I0719 19:38:01.122808   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.122824   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:01.122831   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:01.122896   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:01.159436   68995 cri.go:89] found id: ""
	I0719 19:38:01.159464   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.159473   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:01.159479   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:01.159531   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:01.191500   68995 cri.go:89] found id: ""
	I0719 19:38:01.191524   68995 logs.go:276] 0 containers: []
	W0719 19:38:01.191532   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:01.191541   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:01.191551   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:01.227143   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:01.227173   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:01.278336   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:01.278363   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:01.291658   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:01.291687   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:01.357320   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:01.357346   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:01.357362   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:37:59.251599   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.252565   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:37:59.482653   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:01.483865   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.232078   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.731641   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.938679   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:03.951235   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:03.951295   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:03.987844   68995 cri.go:89] found id: ""
	I0719 19:38:03.987875   68995 logs.go:276] 0 containers: []
	W0719 19:38:03.987887   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:03.987894   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:03.987959   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:04.030628   68995 cri.go:89] found id: ""
	I0719 19:38:04.030654   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.030665   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:04.030672   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:04.030744   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:04.077913   68995 cri.go:89] found id: ""
	I0719 19:38:04.077942   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.077950   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:04.077955   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:04.078022   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:04.130065   68995 cri.go:89] found id: ""
	I0719 19:38:04.130098   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.130109   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:04.130116   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:04.130181   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:04.162061   68995 cri.go:89] found id: ""
	I0719 19:38:04.162089   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.162097   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:04.162103   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:04.162163   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:04.194526   68995 cri.go:89] found id: ""
	I0719 19:38:04.194553   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.194560   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:04.194566   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:04.194613   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:04.226387   68995 cri.go:89] found id: ""
	I0719 19:38:04.226418   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.226429   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:04.226436   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:04.226569   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:04.263346   68995 cri.go:89] found id: ""
	I0719 19:38:04.263375   68995 logs.go:276] 0 containers: []
	W0719 19:38:04.263393   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:04.263404   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:04.263420   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:04.321110   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:04.321140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:04.334294   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:04.334318   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:04.398556   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:04.398577   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:04.398592   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:04.485108   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:04.485139   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:03.751057   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:05.751335   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:03.983215   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:06.482374   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.231511   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.232368   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:07.022235   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:07.034595   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:07.034658   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:07.067454   68995 cri.go:89] found id: ""
	I0719 19:38:07.067482   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.067492   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:07.067499   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:07.067556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:07.101154   68995 cri.go:89] found id: ""
	I0719 19:38:07.101183   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.101192   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:07.101206   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:07.101269   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:07.136084   68995 cri.go:89] found id: ""
	I0719 19:38:07.136110   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.136118   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:07.136123   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:07.136173   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:07.173278   68995 cri.go:89] found id: ""
	I0719 19:38:07.173304   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.173312   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:07.173317   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:07.173380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:07.209680   68995 cri.go:89] found id: ""
	I0719 19:38:07.209707   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.209719   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:07.209727   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:07.209788   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:07.242425   68995 cri.go:89] found id: ""
	I0719 19:38:07.242453   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.242462   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:07.242471   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:07.242538   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:07.276112   68995 cri.go:89] found id: ""
	I0719 19:38:07.276137   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.276147   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:07.276153   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:07.276216   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:07.309460   68995 cri.go:89] found id: ""
	I0719 19:38:07.309489   68995 logs.go:276] 0 containers: []
	W0719 19:38:07.309500   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:07.309514   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:07.309528   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:07.361611   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:07.361650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:07.375905   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:07.375942   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:07.441390   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:07.441412   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:07.441426   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:07.522023   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:07.522068   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.061774   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:10.074620   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:10.074705   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:10.107260   68995 cri.go:89] found id: ""
	I0719 19:38:10.107284   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.107293   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:10.107298   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:10.107345   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:10.142704   68995 cri.go:89] found id: ""
	I0719 19:38:10.142731   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.142738   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:10.142744   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:10.142793   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:10.174713   68995 cri.go:89] found id: ""
	I0719 19:38:10.174751   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.174762   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:10.174770   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:10.174837   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:10.205687   68995 cri.go:89] found id: ""
	I0719 19:38:10.205711   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.205719   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:10.205724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:10.205782   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:10.238754   68995 cri.go:89] found id: ""
	I0719 19:38:10.238781   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.238789   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:10.238795   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:10.238844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:10.271239   68995 cri.go:89] found id: ""
	I0719 19:38:10.271264   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.271274   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:10.271281   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:10.271380   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:10.306944   68995 cri.go:89] found id: ""
	I0719 19:38:10.306968   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.306975   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:10.306980   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:10.307030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:10.336681   68995 cri.go:89] found id: ""
	I0719 19:38:10.336708   68995 logs.go:276] 0 containers: []
	W0719 19:38:10.336716   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:10.336725   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:10.336736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:10.376368   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:10.376405   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:10.432676   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:10.432722   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:10.446932   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:10.446965   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:10.514285   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:10.514307   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:10.514335   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:08.251480   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.252276   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.751674   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:08.482753   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:10.483719   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.982523   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:12.732172   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.232040   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:13.097845   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:13.110781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:13.110862   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:13.144732   68995 cri.go:89] found id: ""
	I0719 19:38:13.144760   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.144772   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:13.144779   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:13.144840   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:13.176313   68995 cri.go:89] found id: ""
	I0719 19:38:13.176356   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.176364   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:13.176372   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:13.176437   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:13.208219   68995 cri.go:89] found id: ""
	I0719 19:38:13.208249   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.208260   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:13.208267   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:13.208324   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:13.242977   68995 cri.go:89] found id: ""
	I0719 19:38:13.243004   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.243014   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:13.243022   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:13.243080   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:13.275356   68995 cri.go:89] found id: ""
	I0719 19:38:13.275382   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.275389   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:13.275396   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:13.275443   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:13.310244   68995 cri.go:89] found id: ""
	I0719 19:38:13.310275   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.310283   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:13.310290   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:13.310356   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:13.342626   68995 cri.go:89] found id: ""
	I0719 19:38:13.342652   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.342660   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:13.342667   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:13.342737   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:13.374943   68995 cri.go:89] found id: ""
	I0719 19:38:13.374965   68995 logs.go:276] 0 containers: []
	W0719 19:38:13.374973   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:13.374983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:13.374993   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:13.429940   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:13.429975   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:13.443636   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:13.443677   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:13.514439   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:13.514468   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:13.514481   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:13.595337   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:13.595371   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.132302   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:16.145088   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:16.145148   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:16.176994   68995 cri.go:89] found id: ""
	I0719 19:38:16.177023   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.177034   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:16.177042   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:16.177103   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:16.208730   68995 cri.go:89] found id: ""
	I0719 19:38:16.208760   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.208771   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:16.208779   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:16.208843   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:16.244708   68995 cri.go:89] found id: ""
	I0719 19:38:16.244736   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.244747   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:16.244755   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:16.244816   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:16.279211   68995 cri.go:89] found id: ""
	I0719 19:38:16.279237   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.279245   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:16.279250   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:16.279328   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:16.312856   68995 cri.go:89] found id: ""
	I0719 19:38:16.312891   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.312902   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:16.312912   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:16.312980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:16.347598   68995 cri.go:89] found id: ""
	I0719 19:38:16.347623   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.347630   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:16.347635   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:16.347695   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:16.381927   68995 cri.go:89] found id: ""
	I0719 19:38:16.381957   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.381966   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:16.381973   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:16.382035   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:16.415020   68995 cri.go:89] found id: ""
	I0719 19:38:16.415053   68995 logs.go:276] 0 containers: []
	W0719 19:38:16.415061   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:16.415071   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:16.415084   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:16.428268   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:16.428304   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:16.498869   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:16.498887   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:16.498902   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:16.581237   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:16.581271   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:16.616727   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:16.616755   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:15.251690   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.752442   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:15.483131   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.484337   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:17.232857   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.233157   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.172304   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:19.185206   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:19.185282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:19.218156   68995 cri.go:89] found id: ""
	I0719 19:38:19.218189   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.218200   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:19.218207   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:19.218277   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:19.251173   68995 cri.go:89] found id: ""
	I0719 19:38:19.251201   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.251211   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:19.251219   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:19.251280   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:19.283537   68995 cri.go:89] found id: ""
	I0719 19:38:19.283566   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.283578   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:19.283586   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:19.283654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:19.315995   68995 cri.go:89] found id: ""
	I0719 19:38:19.316026   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.316047   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:19.316054   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:19.316130   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:19.348917   68995 cri.go:89] found id: ""
	I0719 19:38:19.348940   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.348949   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:19.348954   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:19.349001   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:19.382174   68995 cri.go:89] found id: ""
	I0719 19:38:19.382206   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.382220   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:19.382228   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:19.382290   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:19.416178   68995 cri.go:89] found id: ""
	I0719 19:38:19.416208   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.416217   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:19.416222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:19.416268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:19.449333   68995 cri.go:89] found id: ""
	I0719 19:38:19.449364   68995 logs.go:276] 0 containers: []
	W0719 19:38:19.449375   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:19.449385   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:19.449397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:19.503673   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:19.503708   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:19.517815   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:19.517847   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:19.583811   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:19.583849   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:19.583865   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:19.658742   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:19.658780   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:20.252451   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.752922   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:19.982854   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.984114   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:21.732342   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.231957   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.232440   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:22.197792   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:22.211464   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:22.211526   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:22.248860   68995 cri.go:89] found id: ""
	I0719 19:38:22.248891   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.248899   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:22.248905   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:22.248967   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:22.293173   68995 cri.go:89] found id: ""
	I0719 19:38:22.293198   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.293206   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:22.293212   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:22.293268   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:22.328826   68995 cri.go:89] found id: ""
	I0719 19:38:22.328857   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.328868   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:22.328875   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:22.328939   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:22.361560   68995 cri.go:89] found id: ""
	I0719 19:38:22.361590   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.361601   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:22.361608   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:22.361668   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:22.400899   68995 cri.go:89] found id: ""
	I0719 19:38:22.400927   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.400935   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:22.400941   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:22.400988   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:22.437159   68995 cri.go:89] found id: ""
	I0719 19:38:22.437185   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.437195   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:22.437202   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:22.437264   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:22.472424   68995 cri.go:89] found id: ""
	I0719 19:38:22.472454   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.472464   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:22.472472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:22.472532   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:22.507649   68995 cri.go:89] found id: ""
	I0719 19:38:22.507678   68995 logs.go:276] 0 containers: []
	W0719 19:38:22.507688   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:22.507704   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:22.507718   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:22.559616   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:22.559649   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:22.573436   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:22.573468   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:22.643245   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:22.643279   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:22.643319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:22.721532   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:22.721571   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.264435   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:25.277324   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:25.277400   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:25.308328   68995 cri.go:89] found id: ""
	I0719 19:38:25.308359   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.308370   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:25.308377   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:25.308429   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:25.344100   68995 cri.go:89] found id: ""
	I0719 19:38:25.344130   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.344140   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:25.344148   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:25.344207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:25.379393   68995 cri.go:89] found id: ""
	I0719 19:38:25.379430   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.379440   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:25.379447   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:25.379495   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:25.417344   68995 cri.go:89] found id: ""
	I0719 19:38:25.417369   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.417377   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:25.417383   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:25.417467   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:25.450753   68995 cri.go:89] found id: ""
	I0719 19:38:25.450778   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.450787   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:25.450793   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:25.450845   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:25.483479   68995 cri.go:89] found id: ""
	I0719 19:38:25.483503   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.483512   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:25.483520   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:25.483585   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:25.514954   68995 cri.go:89] found id: ""
	I0719 19:38:25.514978   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.514986   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:25.514992   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:25.515040   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:25.547802   68995 cri.go:89] found id: ""
	I0719 19:38:25.547828   68995 logs.go:276] 0 containers: []
	W0719 19:38:25.547847   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:25.547858   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:25.547872   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:25.599643   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:25.599679   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:25.612587   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:25.612613   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:25.680267   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:25.680283   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:25.680294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:25.764185   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:25.764215   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:25.253957   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:27.751614   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:24.482926   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:26.982157   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.731943   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:30.732225   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:28.309615   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:28.322513   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:28.322590   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:28.354692   68995 cri.go:89] found id: ""
	I0719 19:38:28.354728   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.354738   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:28.354746   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:28.354819   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:28.392144   68995 cri.go:89] found id: ""
	I0719 19:38:28.392170   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.392180   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:28.392187   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:28.392245   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:28.428196   68995 cri.go:89] found id: ""
	I0719 19:38:28.428226   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.428235   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:28.428243   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:28.428303   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:28.460353   68995 cri.go:89] found id: ""
	I0719 19:38:28.460385   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.460396   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:28.460403   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:28.460465   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:28.493143   68995 cri.go:89] found id: ""
	I0719 19:38:28.493168   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.493177   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:28.493184   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:28.493240   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:28.527379   68995 cri.go:89] found id: ""
	I0719 19:38:28.527404   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.527414   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:28.527422   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:28.527477   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:28.561506   68995 cri.go:89] found id: ""
	I0719 19:38:28.561531   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.561547   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:28.561555   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:28.561615   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:28.593832   68995 cri.go:89] found id: ""
	I0719 19:38:28.593859   68995 logs.go:276] 0 containers: []
	W0719 19:38:28.593868   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:28.593878   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:28.593894   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:28.606980   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:28.607011   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:28.680957   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:28.680984   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:28.680997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:28.755705   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:28.755738   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:28.793362   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:28.793397   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.343185   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:31.355903   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:31.355980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:31.390158   68995 cri.go:89] found id: ""
	I0719 19:38:31.390188   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.390198   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:31.390203   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:31.390256   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:31.422743   68995 cri.go:89] found id: ""
	I0719 19:38:31.422777   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.422788   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:31.422796   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:31.422858   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:31.457886   68995 cri.go:89] found id: ""
	I0719 19:38:31.457912   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.457922   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:31.457927   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:31.457987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:31.493625   68995 cri.go:89] found id: ""
	I0719 19:38:31.493648   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.493655   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:31.493670   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:31.493724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:31.525764   68995 cri.go:89] found id: ""
	I0719 19:38:31.525789   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.525797   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:31.525803   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:31.525857   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:31.558453   68995 cri.go:89] found id: ""
	I0719 19:38:31.558478   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.558489   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:31.558499   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:31.558565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:31.597875   68995 cri.go:89] found id: ""
	I0719 19:38:31.597901   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.597911   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:31.597918   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:31.597979   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:31.631688   68995 cri.go:89] found id: ""
	I0719 19:38:31.631712   68995 logs.go:276] 0 containers: []
	W0719 19:38:31.631722   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:31.631730   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:31.631744   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:31.696991   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:31.697017   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:31.697032   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:31.782668   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:31.782712   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:31.822052   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:31.822091   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:31.876818   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:31.876857   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:30.250702   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:32.251104   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:29.482806   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:31.483225   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.232825   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.731586   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:34.390680   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:34.403662   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:34.403718   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:34.437627   68995 cri.go:89] found id: ""
	I0719 19:38:34.437653   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.437661   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:34.437668   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:34.437722   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:34.470906   68995 cri.go:89] found id: ""
	I0719 19:38:34.470935   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.470947   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:34.470955   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:34.471019   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:34.504424   68995 cri.go:89] found id: ""
	I0719 19:38:34.504452   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.504464   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:34.504472   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:34.504530   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:34.538610   68995 cri.go:89] found id: ""
	I0719 19:38:34.538638   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.538649   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:34.538657   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:34.538728   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:34.571473   68995 cri.go:89] found id: ""
	I0719 19:38:34.571500   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.571511   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:34.571518   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:34.571592   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:34.608465   68995 cri.go:89] found id: ""
	I0719 19:38:34.608499   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.608509   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:34.608516   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:34.608581   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:34.650110   68995 cri.go:89] found id: ""
	I0719 19:38:34.650140   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.650151   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:34.650158   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:34.650220   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:34.683386   68995 cri.go:89] found id: ""
	I0719 19:38:34.683409   68995 logs.go:276] 0 containers: []
	W0719 19:38:34.683416   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:34.683427   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:34.683443   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:34.736264   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:34.736294   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:34.751293   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:34.751338   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:34.818817   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:34.818838   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:34.818854   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:34.899383   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:34.899419   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:34.252210   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:36.754343   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:33.984014   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:35.984811   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.732623   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.231802   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:37.438455   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:37.451494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:37.451563   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:37.485908   68995 cri.go:89] found id: ""
	I0719 19:38:37.485934   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.485943   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:37.485950   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:37.486006   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:37.519156   68995 cri.go:89] found id: ""
	I0719 19:38:37.519183   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.519193   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:37.519200   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:37.519257   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:37.561498   68995 cri.go:89] found id: ""
	I0719 19:38:37.561530   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.561541   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:37.561548   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:37.561608   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:37.598971   68995 cri.go:89] found id: ""
	I0719 19:38:37.599002   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.599012   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:37.599019   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:37.599075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:37.641257   68995 cri.go:89] found id: ""
	I0719 19:38:37.641304   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.641315   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:37.641323   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:37.641376   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:37.676572   68995 cri.go:89] found id: ""
	I0719 19:38:37.676596   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.676605   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:37.676610   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:37.676671   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:37.712105   68995 cri.go:89] found id: ""
	I0719 19:38:37.712131   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.712139   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:37.712145   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:37.712201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:37.747850   68995 cri.go:89] found id: ""
	I0719 19:38:37.747879   68995 logs.go:276] 0 containers: []
	W0719 19:38:37.747889   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:37.747899   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:37.747912   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:37.800904   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:37.800939   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:37.814264   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:37.814293   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:37.891872   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:37.891897   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:37.891913   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:37.967997   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:37.968033   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.511969   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:40.525191   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:40.525259   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:40.559619   68995 cri.go:89] found id: ""
	I0719 19:38:40.559651   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.559663   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:40.559671   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:40.559730   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:40.592775   68995 cri.go:89] found id: ""
	I0719 19:38:40.592804   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.592812   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:40.592817   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:40.592873   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:40.625524   68995 cri.go:89] found id: ""
	I0719 19:38:40.625546   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.625553   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:40.625558   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:40.625624   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:40.662117   68995 cri.go:89] found id: ""
	I0719 19:38:40.662151   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.662163   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:40.662171   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:40.662231   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:40.694835   68995 cri.go:89] found id: ""
	I0719 19:38:40.694868   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.694877   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:40.694885   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:40.694944   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:40.726879   68995 cri.go:89] found id: ""
	I0719 19:38:40.726905   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.726913   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:40.726919   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:40.726975   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:40.764663   68995 cri.go:89] found id: ""
	I0719 19:38:40.764688   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.764700   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:40.764707   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:40.764752   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:40.796684   68995 cri.go:89] found id: ""
	I0719 19:38:40.796711   68995 logs.go:276] 0 containers: []
	W0719 19:38:40.796718   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:40.796726   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:40.796736   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:40.834549   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:40.834588   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:40.886521   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:40.886561   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:40.900162   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:40.900199   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:40.969263   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:40.969285   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:40.969319   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:39.250513   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:41.251724   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:38.482887   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:40.484413   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.982430   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:42.232190   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:44.232617   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:43.545375   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:43.559091   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:43.559174   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:43.591901   68995 cri.go:89] found id: ""
	I0719 19:38:43.591929   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.591939   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:43.591946   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:43.592009   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:43.656005   68995 cri.go:89] found id: ""
	I0719 19:38:43.656035   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.656046   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:43.656055   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:43.656126   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:43.690049   68995 cri.go:89] found id: ""
	I0719 19:38:43.690073   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.690085   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:43.690092   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:43.690150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:43.723449   68995 cri.go:89] found id: ""
	I0719 19:38:43.723475   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.723483   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:43.723490   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:43.723565   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:43.759683   68995 cri.go:89] found id: ""
	I0719 19:38:43.759710   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.759718   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:43.759724   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:43.759781   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:43.795127   68995 cri.go:89] found id: ""
	I0719 19:38:43.795154   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.795162   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:43.795179   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:43.795225   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:43.832493   68995 cri.go:89] found id: ""
	I0719 19:38:43.832522   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.832530   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:43.832536   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:43.832588   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:43.868651   68995 cri.go:89] found id: ""
	I0719 19:38:43.868686   68995 logs.go:276] 0 containers: []
	W0719 19:38:43.868697   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:43.868709   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:43.868724   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:43.908054   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:43.908096   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:43.960508   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:43.960546   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:43.973793   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:43.973817   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:44.046185   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:44.046210   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:44.046224   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:46.628410   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:46.640786   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:46.640853   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:46.672921   68995 cri.go:89] found id: ""
	I0719 19:38:46.672950   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.672960   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:46.672967   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:46.673030   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:46.706817   68995 cri.go:89] found id: ""
	I0719 19:38:46.706846   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.706857   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:46.706864   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:46.706922   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:46.741130   68995 cri.go:89] found id: ""
	I0719 19:38:46.741151   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.741158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:46.741163   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:46.741207   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:46.778639   68995 cri.go:89] found id: ""
	I0719 19:38:46.778664   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.778671   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:46.778677   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:46.778724   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:46.812605   68995 cri.go:89] found id: ""
	I0719 19:38:46.812632   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.812648   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:46.812656   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:46.812709   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:46.848459   68995 cri.go:89] found id: ""
	I0719 19:38:46.848490   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.848499   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:46.848506   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:46.848554   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:46.883444   68995 cri.go:89] found id: ""
	I0719 19:38:46.883475   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.883486   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:46.883494   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:46.883553   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:46.916932   68995 cri.go:89] found id: ""
	I0719 19:38:46.916959   68995 logs.go:276] 0 containers: []
	W0719 19:38:46.916974   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:46.916983   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:46.916994   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:46.967243   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:46.967273   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:46.980786   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:46.980814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:38:43.253266   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.752678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:45.482055   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:47.482299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:46.733205   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.232365   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	W0719 19:38:47.050672   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:47.050698   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:47.050713   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:47.136521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:47.136557   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:49.673673   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:49.687071   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:49.687150   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:49.720268   68995 cri.go:89] found id: ""
	I0719 19:38:49.720296   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.720305   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:49.720313   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:49.720375   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:49.754485   68995 cri.go:89] found id: ""
	I0719 19:38:49.754512   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.754520   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:49.754527   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:49.754595   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:49.786533   68995 cri.go:89] found id: ""
	I0719 19:38:49.786557   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.786566   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:49.786573   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:49.786637   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:49.819719   68995 cri.go:89] found id: ""
	I0719 19:38:49.819749   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.819758   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:49.819764   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:49.819829   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:49.856826   68995 cri.go:89] found id: ""
	I0719 19:38:49.856859   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.856869   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:49.856876   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:49.856942   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:49.891737   68995 cri.go:89] found id: ""
	I0719 19:38:49.891763   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.891774   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:49.891781   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:49.891860   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:49.924660   68995 cri.go:89] found id: ""
	I0719 19:38:49.924690   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.924699   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:49.924704   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:49.924761   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:49.956896   68995 cri.go:89] found id: ""
	I0719 19:38:49.956925   68995 logs.go:276] 0 containers: []
	W0719 19:38:49.956936   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:49.956946   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:49.956959   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:50.012836   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:50.012867   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:50.028269   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:50.028292   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:50.128352   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:50.128380   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:50.128392   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:50.205971   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:50.206004   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:48.252120   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:50.751581   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:49.482679   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.984263   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:51.732882   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.232300   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.235372   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:52.742970   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:52.756222   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:52.756294   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:52.789556   68995 cri.go:89] found id: ""
	I0719 19:38:52.789585   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.789595   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:52.789601   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:52.789660   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:52.820745   68995 cri.go:89] found id: ""
	I0719 19:38:52.820790   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.820803   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:52.820811   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:52.820878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:52.854667   68995 cri.go:89] found id: ""
	I0719 19:38:52.854698   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.854709   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:52.854716   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:52.854775   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:52.887125   68995 cri.go:89] found id: ""
	I0719 19:38:52.887152   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.887162   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:52.887167   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:52.887215   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:52.920824   68995 cri.go:89] found id: ""
	I0719 19:38:52.920861   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.920872   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:52.920878   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:52.920941   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:52.952819   68995 cri.go:89] found id: ""
	I0719 19:38:52.952848   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.952856   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:52.952861   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:52.952915   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:52.985508   68995 cri.go:89] found id: ""
	I0719 19:38:52.985535   68995 logs.go:276] 0 containers: []
	W0719 19:38:52.985545   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:52.985553   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:52.985617   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:53.018633   68995 cri.go:89] found id: ""
	I0719 19:38:53.018660   68995 logs.go:276] 0 containers: []
	W0719 19:38:53.018669   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:53.018678   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:53.018690   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:53.071496   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:53.071533   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:53.084685   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:53.084709   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:53.146579   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:53.146603   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:53.146617   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:53.227521   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:53.227552   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:55.768276   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:55.782991   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:55.783075   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:55.825184   68995 cri.go:89] found id: ""
	I0719 19:38:55.825206   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.825214   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:55.825220   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:55.825282   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:55.872997   68995 cri.go:89] found id: ""
	I0719 19:38:55.873024   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.873033   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:55.873041   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:55.873106   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:55.911478   68995 cri.go:89] found id: ""
	I0719 19:38:55.911510   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.911521   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:55.911530   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:55.911654   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:55.943930   68995 cri.go:89] found id: ""
	I0719 19:38:55.943960   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.943971   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:55.943978   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:55.944046   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:55.977743   68995 cri.go:89] found id: ""
	I0719 19:38:55.977771   68995 logs.go:276] 0 containers: []
	W0719 19:38:55.977781   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:55.977788   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:55.977855   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:56.014436   68995 cri.go:89] found id: ""
	I0719 19:38:56.014467   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.014479   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:56.014486   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:56.014561   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:56.047209   68995 cri.go:89] found id: ""
	I0719 19:38:56.047244   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.047255   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:56.047263   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:56.047348   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:56.079632   68995 cri.go:89] found id: ""
	I0719 19:38:56.079655   68995 logs.go:276] 0 containers: []
	W0719 19:38:56.079663   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:56.079671   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:56.079685   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:56.153319   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:56.153343   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:56.153356   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:56.248187   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:56.248228   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:56.291329   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:56.291354   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:38:56.346462   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:56.346499   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
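The cycle above is minikube probing the node's CRI-O runtime for each expected control-plane container by name and finding none. The same probe can be reproduced by hand; a minimal sketch, assuming shell access to the node (for example via "minikube ssh") and using the component names from the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        # same command the harness runs: list all containers (any state) whose name matches
        ids=$(sudo crictl ps -a --quiet --name="$name")
        if [ -z "$ids" ]; then
            echo "no container found matching \"$name\""
        else
            echo "$name: $ids"
        fi
    done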
	I0719 19:38:53.251881   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:55.751678   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:54.484732   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:56.982643   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.733025   68617 pod_ready.go:102] pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.731479   68617 pod_ready.go:81] duration metric: took 4m0.005795927s for pod "metrics-server-569cc877fc-sfc85" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:00.731502   68617 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 19:39:00.731509   68617 pod_ready.go:38] duration metric: took 4m4.040942986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:00.731523   68617 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:39:00.731547   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:00.731600   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:00.786510   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:00.786531   68617 cri.go:89] found id: ""
	I0719 19:39:00.786539   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:00.786595   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.791234   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:00.791294   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:00.826465   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:00.826492   68617 cri.go:89] found id: ""
	I0719 19:39:00.826503   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:00.826562   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.830418   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:00.830481   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:00.865831   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:00.865851   68617 cri.go:89] found id: ""
	I0719 19:39:00.865859   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:00.865903   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.870162   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:00.870209   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:00.912742   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:00.912766   68617 cri.go:89] found id: ""
	I0719 19:39:00.912775   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:00.912829   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.916725   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:00.916777   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:00.951824   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:00.951861   68617 cri.go:89] found id: ""
	I0719 19:39:00.951871   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:00.951920   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.955637   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:00.955702   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:00.990611   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:00.990637   68617 cri.go:89] found id: ""
	I0719 19:39:00.990645   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:00.990690   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:00.994506   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:00.994568   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:01.030959   68617 cri.go:89] found id: ""
	I0719 19:39:01.030991   68617 logs.go:276] 0 containers: []
	W0719 19:39:01.031001   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:01.031007   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:01.031069   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:01.068897   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.068928   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.068934   68617 cri.go:89] found id: ""
	I0719 19:39:01.068945   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:01.069006   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.073087   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:01.076976   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:01.076994   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:58.861739   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:38:58.874774   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:38:58.874842   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:38:58.913102   68995 cri.go:89] found id: ""
	I0719 19:38:58.913130   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.913138   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:38:58.913143   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:38:58.913201   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.949367   68995 cri.go:89] found id: ""
	I0719 19:38:58.949397   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.949407   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:38:58.949415   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:38:58.949483   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:38:58.984133   68995 cri.go:89] found id: ""
	I0719 19:38:58.984152   68995 logs.go:276] 0 containers: []
	W0719 19:38:58.984158   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:38:58.984164   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:38:58.984224   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:38:59.016356   68995 cri.go:89] found id: ""
	I0719 19:38:59.016383   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.016393   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:38:59.016401   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:38:59.016469   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:38:59.049981   68995 cri.go:89] found id: ""
	I0719 19:38:59.050009   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.050017   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:38:59.050027   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:38:59.050087   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:38:59.082248   68995 cri.go:89] found id: ""
	I0719 19:38:59.082270   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.082278   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:38:59.082284   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:38:59.082358   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:38:59.115980   68995 cri.go:89] found id: ""
	I0719 19:38:59.116011   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.116021   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:38:59.116028   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:38:59.116102   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:38:59.148379   68995 cri.go:89] found id: ""
	I0719 19:38:59.148404   68995 logs.go:276] 0 containers: []
	W0719 19:38:59.148412   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:38:59.148420   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:38:59.148432   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:38:59.161066   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:38:59.161099   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:38:59.225628   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:38:59.225658   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:38:59.225669   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:38:59.311640   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:38:59.311682   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:38:59.360964   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:38:59.360997   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:01.915091   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:01.928928   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:01.928987   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:01.962436   68995 cri.go:89] found id: ""
	I0719 19:39:01.962472   68995 logs.go:276] 0 containers: []
	W0719 19:39:01.962483   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:01.962490   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:01.962559   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:38:58.251636   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:00.752163   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:38:58.983433   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.482798   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:01.626674   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:01.626731   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:01.670756   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:01.670787   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:01.706541   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:01.706578   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:01.756534   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:01.756570   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:01.805577   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:01.805613   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:01.839258   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:01.839288   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:01.878371   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:01.878404   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:01.917151   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:01.917183   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:01.956111   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:01.956149   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.023029   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.023081   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:02.040889   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.040920   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:02.188194   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.188226   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:04.739149   68617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.757462   68617 api_server.go:72] duration metric: took 4m15.770324413s to wait for apiserver process to appear ...
	I0719 19:39:04.757482   68617 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:39:04.757512   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.757557   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:04.796517   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:04.796543   68617 cri.go:89] found id: ""
	I0719 19:39:04.796553   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:04.796610   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.801976   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:04.802046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:04.842403   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:04.842425   68617 cri.go:89] found id: ""
	I0719 19:39:04.842434   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:04.842487   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.846620   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:04.846694   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:04.885318   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:04.885343   68617 cri.go:89] found id: ""
	I0719 19:39:04.885353   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:04.885411   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.889813   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:04.889883   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:04.928174   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:04.928201   68617 cri.go:89] found id: ""
	I0719 19:39:04.928211   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:04.928278   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.932724   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:04.932807   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:04.970800   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:04.970823   68617 cri.go:89] found id: ""
	I0719 19:39:04.970830   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:04.970882   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:04.975294   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:04.975369   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.012666   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:05.012690   68617 cri.go:89] found id: ""
	I0719 19:39:05.012699   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:05.012766   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.016841   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.016914   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.061117   68617 cri.go:89] found id: ""
	I0719 19:39:05.061141   68617 logs.go:276] 0 containers: []
	W0719 19:39:05.061151   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.061157   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:05.061216   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:05.102934   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.102959   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:05.102965   68617 cri.go:89] found id: ""
	I0719 19:39:05.102974   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:05.103029   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.107192   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:05.110970   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:05.110990   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:05.152862   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:05.152900   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:05.194080   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.194110   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.657623   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.657659   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.703773   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.703802   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:05.810811   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:05.810842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:05.859793   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:05.859850   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:05.911578   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:05.911606   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:05.947358   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:05.947384   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:05.990731   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:05.990761   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:06.041223   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:06.041253   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:06.076037   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:06.076068   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:06.131177   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:06.131213   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
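Here the harness does find running containers and pulls each one's recent output with crictl. A minimal sketch of the same retrieval, assuming shell access to the node and using etcd as the example component (the id lookup and the logs command mirror the ones in the log above):

    # resolve the container id by name, then tail its logs the way the harness does
    id=$(sudo crictl ps -a --quiet --name=etcd | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"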
	I0719 19:39:02.003045   68995 cri.go:89] found id: ""
	I0719 19:39:02.003078   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.003090   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:02.003098   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:02.003160   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:02.041835   68995 cri.go:89] found id: ""
	I0719 19:39:02.041863   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.041880   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:02.041888   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:02.041954   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:02.080285   68995 cri.go:89] found id: ""
	I0719 19:39:02.080340   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.080353   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:02.080361   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:02.080431   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:02.116225   68995 cri.go:89] found id: ""
	I0719 19:39:02.116308   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.116323   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:02.116331   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:02.116390   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:02.154818   68995 cri.go:89] found id: ""
	I0719 19:39:02.154848   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.154860   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:02.154868   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:02.154931   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:02.190689   68995 cri.go:89] found id: ""
	I0719 19:39:02.190715   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.190727   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:02.190741   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:02.190810   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:02.224155   68995 cri.go:89] found id: ""
	I0719 19:39:02.224185   68995 logs.go:276] 0 containers: []
	W0719 19:39:02.224196   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:02.224207   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:02.224221   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:02.295623   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:02.295649   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:02.295665   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:02.378354   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:02.378390   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:02.416194   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:02.416222   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:02.470997   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:02.471039   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:04.985908   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:04.998485   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:04.998556   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:05.032776   68995 cri.go:89] found id: ""
	I0719 19:39:05.032801   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.032812   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:39:05.032819   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:05.032878   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:05.066650   68995 cri.go:89] found id: ""
	I0719 19:39:05.066677   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.066686   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:39:05.066693   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:05.066771   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:05.103689   68995 cri.go:89] found id: ""
	I0719 19:39:05.103712   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.103720   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:39:05.103728   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:05.103786   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:05.141739   68995 cri.go:89] found id: ""
	I0719 19:39:05.141767   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.141778   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:39:05.141785   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:05.141844   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:05.178535   68995 cri.go:89] found id: ""
	I0719 19:39:05.178566   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.178576   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:39:05.178582   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:05.178640   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:05.213876   68995 cri.go:89] found id: ""
	I0719 19:39:05.213904   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.213914   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:39:05.213921   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:05.213980   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:05.255039   68995 cri.go:89] found id: ""
	I0719 19:39:05.255063   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.255070   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:05.255076   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:39:05.255131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:39:05.288508   68995 cri.go:89] found id: ""
	I0719 19:39:05.288536   68995 logs.go:276] 0 containers: []
	W0719 19:39:05.288547   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:39:05.288558   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:05.288572   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:05.301079   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:05.301109   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:39:05.368109   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 19:39:05.368128   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:05.368140   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:05.453326   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:39:05.453360   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:05.494780   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:05.494814   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:03.253971   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:05.751964   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:03.983599   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:06.482412   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:08.646117   68617 api_server.go:253] Checking apiserver healthz at https://192.168.61.107:8443/healthz ...
	I0719 19:39:08.651440   68617 api_server.go:279] https://192.168.61.107:8443/healthz returned 200:
	ok
	I0719 19:39:08.652517   68617 api_server.go:141] control plane version: v1.30.3
	I0719 19:39:08.652538   68617 api_server.go:131] duration metric: took 3.895047319s to wait for apiserver health ...
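The healthz probe above can be repeated from outside the harness against the address it logged. A hedged sketch: -k skips verification of minikube's self-signed certificate, and whether an unauthenticated request to /healthz is accepted depends on the cluster's default RBAC bindings:

    curl -k https://192.168.61.107:8443/healthz
    # a healthy control plane answers: ok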
	I0719 19:39:08.652545   68617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:39:08.652568   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:39:08.652629   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:39:08.688312   68617 cri.go:89] found id: "82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:08.688356   68617 cri.go:89] found id: ""
	I0719 19:39:08.688366   68617 logs.go:276] 1 containers: [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5]
	I0719 19:39:08.688424   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.692993   68617 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:39:08.693046   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:39:08.736900   68617 cri.go:89] found id: "46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:08.736920   68617 cri.go:89] found id: ""
	I0719 19:39:08.736927   68617 logs.go:276] 1 containers: [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c]
	I0719 19:39:08.736981   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.741409   68617 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:39:08.741473   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:39:08.780867   68617 cri.go:89] found id: "9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:08.780885   68617 cri.go:89] found id: ""
	I0719 19:39:08.780892   68617 logs.go:276] 1 containers: [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb]
	I0719 19:39:08.780935   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.785161   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:39:08.785233   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:39:08.818139   68617 cri.go:89] found id: "560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.818161   68617 cri.go:89] found id: ""
	I0719 19:39:08.818170   68617 logs.go:276] 1 containers: [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e]
	I0719 19:39:08.818225   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.822066   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:39:08.822116   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:39:08.855310   68617 cri.go:89] found id: "4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:08.855333   68617 cri.go:89] found id: ""
	I0719 19:39:08.855344   68617 logs.go:276] 1 containers: [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e]
	I0719 19:39:08.855402   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.859647   68617 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:39:08.859716   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:39:08.897948   68617 cri.go:89] found id: "4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:08.897973   68617 cri.go:89] found id: ""
	I0719 19:39:08.897981   68617 logs.go:276] 1 containers: [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498]
	I0719 19:39:08.898030   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.901861   68617 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:39:08.901909   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:39:08.940408   68617 cri.go:89] found id: ""
	I0719 19:39:08.940440   68617 logs.go:276] 0 containers: []
	W0719 19:39:08.940451   68617 logs.go:278] No container was found matching "kindnet"
	I0719 19:39:08.940458   68617 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 19:39:08.940516   68617 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 19:39:08.979077   68617 cri.go:89] found id: "aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:08.979104   68617 cri.go:89] found id: "f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:08.979109   68617 cri.go:89] found id: ""
	I0719 19:39:08.979118   68617 logs.go:276] 2 containers: [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91]
	I0719 19:39:08.979185   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.983947   68617 ssh_runner.go:195] Run: which crictl
	I0719 19:39:08.987604   68617 logs.go:123] Gathering logs for coredns [9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb] ...
	I0719 19:39:08.987624   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c90b21a180b4037f206fdd8ec9c2a81dd211c2c1546d1a441b967296c6dd4cb"
	I0719 19:39:09.021803   68617 logs.go:123] Gathering logs for kube-proxy [4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e] ...
	I0719 19:39:09.021834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b54e8db4eb450b2936a3fa5277030ea95a53505a5902121bcbb6032ad6a8c6e"
	I0719 19:39:09.064650   68617 logs.go:123] Gathering logs for kube-controller-manager [4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498] ...
	I0719 19:39:09.064677   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e88e6527a2ba2ee3feefcef291ec731a96118404094a09d03ceed8f881cd498"
	I0719 19:39:09.120779   68617 logs.go:123] Gathering logs for storage-provisioner [f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91] ...
	I0719 19:39:09.120834   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0710a5e9dce002cad2ca17952aab215da1e7a42326fb8b9c5ea0d139875eb91"
	I0719 19:39:09.155898   68617 logs.go:123] Gathering logs for kubelet ...
	I0719 19:39:09.155926   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:39:09.210461   68617 logs.go:123] Gathering logs for dmesg ...
	I0719 19:39:09.210490   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:39:09.224154   68617 logs.go:123] Gathering logs for kube-apiserver [82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5] ...
	I0719 19:39:09.224181   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82586fde0cf2510bc0b7564c4c723fc90089fae35dfb1a35639bf38c624a6ae5"
	I0719 19:39:09.266941   68617 logs.go:123] Gathering logs for storage-provisioner [aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040] ...
	I0719 19:39:09.266974   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec5bd5dcc9a5b6f132b91d48bb1e3a18d7a7bb927f6795f9c6e4cc0b1d6b040"
	I0719 19:39:09.306982   68617 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:39:09.307011   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:39:09.689805   68617 logs.go:123] Gathering logs for container status ...
	I0719 19:39:09.689848   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:39:09.733995   68617 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:39:09.734034   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 19:39:09.842306   68617 logs.go:123] Gathering logs for etcd [46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c] ...
	I0719 19:39:09.842339   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46b13ce63930e7b7a040969f171da9b34ba4eea49b3daa9759352c98ec36d90c"
	I0719 19:39:09.889811   68617 logs.go:123] Gathering logs for kube-scheduler [560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e] ...
	I0719 19:39:09.889842   68617 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 560ffa4638473112cba82290891d8e27a61cce028ffa88296f23356dddd51f4e"
	I0719 19:39:08.056065   68995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:39:08.070370   68995 kubeadm.go:597] duration metric: took 4m4.520130366s to restartPrimaryControlPlane
	W0719 19:39:08.070441   68995 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:08.070473   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:12.432441   68617 system_pods.go:59] 8 kube-system pods found
	I0719 19:39:12.432467   68617 system_pods.go:61] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.432472   68617 system_pods.go:61] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.432476   68617 system_pods.go:61] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.432480   68617 system_pods.go:61] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.432484   68617 system_pods.go:61] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.432487   68617 system_pods.go:61] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.432493   68617 system_pods.go:61] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.432498   68617 system_pods.go:61] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.432503   68617 system_pods.go:74] duration metric: took 3.779953026s to wait for pod list to return data ...
	I0719 19:39:12.432509   68617 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:39:12.434809   68617 default_sa.go:45] found service account: "default"
	I0719 19:39:12.434826   68617 default_sa.go:55] duration metric: took 2.3111ms for default service account to be created ...
	I0719 19:39:12.434833   68617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:39:12.440652   68617 system_pods.go:86] 8 kube-system pods found
	I0719 19:39:12.440673   68617 system_pods.go:89] "coredns-7db6d8ff4d-7h48w" [dd691a13-e571-4148-bef0-ba0552277312] Running
	I0719 19:39:12.440678   68617 system_pods.go:89] "etcd-embed-certs-889918" [68eae3c2-11de-4c70-bc28-1f34e1e868fc] Running
	I0719 19:39:12.440683   68617 system_pods.go:89] "kube-apiserver-embed-certs-889918" [48f50846-bc9e-444e-a569-98d28550dcff] Running
	I0719 19:39:12.440689   68617 system_pods.go:89] "kube-controller-manager-embed-certs-889918" [92e4da69-acef-44a2-9301-726c70294727] Running
	I0719 19:39:12.440696   68617 system_pods.go:89] "kube-proxy-5fcgs" [67ef345b-f2ef-4e18-a55c-91e2c08a37b8] Running
	I0719 19:39:12.440707   68617 system_pods.go:89] "kube-scheduler-embed-certs-889918" [f5d74873-1493-42f7-92a7-5d2b98929025] Running
	I0719 19:39:12.440717   68617 system_pods.go:89] "metrics-server-569cc877fc-sfc85" [fc2a1efe-1728-42d9-bd3d-9eb29af783e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:39:12.440728   68617 system_pods.go:89] "storage-provisioner" [32274055-6a2d-4cf9-8ae8-3c5f7cdc66db] Running
	I0719 19:39:12.440737   68617 system_pods.go:126] duration metric: took 5.899051ms to wait for k8s-apps to be running ...
	I0719 19:39:12.440748   68617 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:39:12.440789   68617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.459186   68617 system_svc.go:56] duration metric: took 18.42941ms WaitForService to wait for kubelet
	I0719 19:39:12.459214   68617 kubeadm.go:582] duration metric: took 4m23.472080305s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:39:12.459237   68617 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:39:12.462522   68617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:39:12.462542   68617 node_conditions.go:123] node cpu capacity is 2
	I0719 19:39:12.462552   68617 node_conditions.go:105] duration metric: took 3.31009ms to run NodePressure ...
	I0719 19:39:12.462568   68617 start.go:241] waiting for startup goroutines ...
	I0719 19:39:12.462577   68617 start.go:246] waiting for cluster config update ...
	I0719 19:39:12.462590   68617 start.go:255] writing updated cluster config ...
	I0719 19:39:12.462907   68617 ssh_runner.go:195] Run: rm -f paused
	I0719 19:39:12.529076   68617 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:39:12.531440   68617 out.go:177] * Done! kubectl is now configured to use "embed-certs-889918" cluster and "default" namespace by default
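At this point the embed-certs profile is up and its context has been written to the kubeconfig. A quick sanity check, assuming the context name matches the profile name shown above (which is how minikube names it):

    kubectl --context embed-certs-889918 get nodes
    kubectl --context embed-certs-889918 -n kube-system get pods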
	I0719 19:39:08.251071   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.251424   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.252078   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.404491   68995 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.333989512s)
	I0719 19:39:12.404566   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:12.418839   68995 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:12.430297   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:12.440268   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:12.440287   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:12.440335   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:39:12.449116   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:12.449184   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:12.460572   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:39:12.471305   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:12.471362   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:12.482167   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.493823   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:12.493905   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:12.505544   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:39:12.517178   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:12.517240   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
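The grep/rm pairs above are minikube clearing out kubeconfig files that do not point at the expected control-plane endpoint before re-running kubeadm init. The same pass, written as a compact loop (a sketch of the behavior shown in the log, not minikube's actual code):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already references the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done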
	I0719 19:39:12.529271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:12.620678   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:39:12.620745   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:12.758666   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:12.758855   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:12.759023   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:39:12.939168   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:08.482667   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:10.983631   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:12.941111   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:12.941226   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:12.941319   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:12.941423   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:12.941537   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:12.941651   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:12.941754   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:12.941851   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:12.941935   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:12.942048   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:12.942156   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:12.942238   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:12.942336   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:13.024175   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:13.148395   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:13.342824   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:13.440755   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:13.455869   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:13.456830   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:13.456907   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:13.600063   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:13.602239   68995 out.go:204]   - Booting up control plane ...
	I0719 19:39:13.602381   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:13.608875   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:13.609791   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:13.610664   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:13.612681   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:39:14.252185   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:16.252337   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:13.484347   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:15.983299   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:17.983517   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:18.751634   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.252050   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:20.482791   68536 pod_ready.go:102] pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:21.477050   68536 pod_ready.go:81] duration metric: took 4m0.000605179s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:21.477076   68536 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wf82d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:21.477094   68536 pod_ready.go:38] duration metric: took 4m14.545379745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:21.477123   68536 kubeadm.go:597] duration metric: took 5m1.629449662s to restartPrimaryControlPlane
	W0719 19:39:21.477193   68536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:21.477215   68536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:23.753986   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:26.251761   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:28.252580   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:30.755898   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:33.251879   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:35.750822   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:37.752455   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:40.251762   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:42.752421   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:45.251253   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:47.752059   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:50.253519   68019 pod_ready.go:102] pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace has status "Ready":"False"
	I0719 19:39:51.745969   68019 pod_ready.go:81] duration metric: took 4m0.000948888s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" ...
	E0719 19:39:51.746010   68019 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-j78vm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 19:39:51.746030   68019 pod_ready.go:38] duration metric: took 4m12.537161917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:39:51.746056   68019 kubeadm.go:597] duration metric: took 4m19.596455979s to restartPrimaryControlPlane
	W0719 19:39:51.746106   68019 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 19:39:51.746131   68019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:39:52.520817   68536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.043581976s)
	I0719 19:39:52.520887   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:39:52.536043   68536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:39:52.545993   68536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:39:52.555648   68536 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:39:52.555669   68536 kubeadm.go:157] found existing configuration files:
	
	I0719 19:39:52.555711   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 19:39:52.564602   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:39:52.564712   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:39:52.573899   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 19:39:52.582537   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:39:52.582588   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:39:52.591303   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.600820   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:39:52.600868   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:39:52.610312   68536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 19:39:52.618924   68536 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:39:52.618970   68536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:39:52.627840   68536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:39:52.683442   68536 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 19:39:52.683530   68536 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:39:52.826305   68536 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:39:52.826452   68536 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:39:52.826620   68536 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:39:53.042595   68536 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:39:53.044702   68536 out.go:204]   - Generating certificates and keys ...
	I0719 19:39:53.044784   68536 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:39:53.044870   68536 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:39:53.044980   68536 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:39:53.045077   68536 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:39:53.045156   68536 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:39:53.045204   68536 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:39:53.045263   68536 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:39:53.045340   68536 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:39:53.045435   68536 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:39:53.045541   68536 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:39:53.045606   68536 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:39:53.045764   68536 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:39:53.222737   68536 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:39:53.489072   68536 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:39:53.740066   68536 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:39:53.822505   68536 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:39:53.971335   68536 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:39:53.971975   68536 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:39:53.976245   68536 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:39:53.614108   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:39:53.614700   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:53.614946   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:39:53.978300   68536 out.go:204]   - Booting up control plane ...
	I0719 19:39:53.978437   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:39:53.978565   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:39:53.978663   68536 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:39:53.996201   68536 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:39:53.997047   68536 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:39:53.997148   68536 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:39:54.130113   68536 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:39:54.130245   68536 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:39:55.132570   68536 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002320915s
	I0719 19:39:55.132686   68536 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:39:59.636140   68536 kubeadm.go:310] [api-check] The API server is healthy after 4.50221298s
	I0719 19:39:59.654383   68536 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:39:59.676263   68536 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:39:59.700938   68536 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:39:59.701195   68536 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-578533 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:39:59.712009   68536 kubeadm.go:310] [bootstrap-token] Using token: kew56y.gotqvfalggpgzbky
	I0719 19:39:59.713486   68536 out.go:204]   - Configuring RBAC rules ...
	I0719 19:39:59.713629   68536 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:39:59.720605   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:39:59.726901   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:39:59.730021   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:39:59.733042   68536 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:39:59.736260   68536 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:00.042122   68536 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:00.496191   68536 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:01.043927   68536 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:01.043969   68536 kubeadm.go:310] 
	I0719 19:40:01.044072   68536 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:01.044086   68536 kubeadm.go:310] 
	I0719 19:40:01.044173   68536 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:01.044181   68536 kubeadm.go:310] 
	I0719 19:40:01.044201   68536 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:01.044250   68536 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:01.044339   68536 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:01.044356   68536 kubeadm.go:310] 
	I0719 19:40:01.044493   68536 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:01.044513   68536 kubeadm.go:310] 
	I0719 19:40:01.044584   68536 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:01.044594   68536 kubeadm.go:310] 
	I0719 19:40:01.044670   68536 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:01.044774   68536 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:01.044874   68536 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:01.044888   68536 kubeadm.go:310] 
	I0719 19:40:01.044975   68536 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:01.045047   68536 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:01.045058   68536 kubeadm.go:310] 
	I0719 19:40:01.045169   68536 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045331   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:01.045367   68536 kubeadm.go:310] 	--control-plane 
	I0719 19:40:01.045372   68536 kubeadm.go:310] 
	I0719 19:40:01.045496   68536 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:01.045506   68536 kubeadm.go:310] 
	I0719 19:40:01.045607   68536 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kew56y.gotqvfalggpgzbky \
	I0719 19:40:01.045766   68536 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:01.045950   68536 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
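The only preflight warning in this init is the disabled kubelet unit; following the warning's own suggestion on the node would look like this (run on the control-plane VM):
	sudo systemctl enable kubelet.service
	sudo systemctl status kubelet --no-pager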
	I0719 19:40:01.045974   68536 cni.go:84] Creating CNI manager for ""
	I0719 19:40:01.045983   68536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:01.047577   68536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:39:58.615471   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:39:58.615687   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:01.048902   68536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:01.061165   68536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
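The 496-byte conflist copied here is the bridge CNI configuration for this profile. A representative file of that kind, written the same way (the field values below are illustrative assumptions, not the exact bytes from this run):
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF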
	I0719 19:40:01.083745   68536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:01.083880   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.083893   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-578533 minikube.k8s.io/updated_at=2024_07_19T19_40_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=default-k8s-diff-port-578533 minikube.k8s.io/primary=true
	I0719 19:40:01.115489   68536 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:01.288776   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:01.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.289623   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:02.789147   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.289112   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:03.789167   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.289092   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:04.788811   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.289732   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:05.789486   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.289062   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:06.789641   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.289713   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:07.789100   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.289752   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:08.616294   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:08.616549   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
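For the v1.20.0 cluster (pid 68995) the kubelet health check keeps failing with connection refused. The same probe kubeadm polls can be run by hand on that node to see whether the kubelet is listening; a diagnostic sketch, assuming SSH access to the VM:
	# the endpoint kubeadm's kubelet-check polls
	curl -sSL http://localhost:10248/healthz; echo
	# if the connection is refused, inspect the unit and recent kubelet logs
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 50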
	I0719 19:40:08.789060   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.288783   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:09.789011   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.289587   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:10.789028   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.288947   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:11.789756   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.288867   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:12.789023   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.289682   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.789104   68536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:13.872731   68536 kubeadm.go:1113] duration metric: took 12.788952534s to wait for elevateKubeSystemPrivileges
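The repeated "kubectl get sa default" calls above are minikube waiting for the default service account to appear after creating the minikube-rbac clusterrolebinding. A hypothetical manual check of the same state, using the binary and kubeconfig paths from this run:
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default -n default
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac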
	I0719 19:40:13.872760   68536 kubeadm.go:394] duration metric: took 5m54.069849946s to StartCluster
	I0719 19:40:13.872777   68536 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.872848   68536 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:13.874598   68536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:13.874875   68536 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:13.875049   68536 config.go:182] Loaded profile config "default-k8s-diff-port-578533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:40:13.875047   68536 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:13.875107   68536 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875116   68536 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875140   68536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-578533"
	I0719 19:40:13.875142   68536 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-578533"
	I0719 19:40:13.875198   68536 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875211   68536 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:13.875141   68536 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.875237   68536 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:13.875252   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875272   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.875610   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875620   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875641   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875648   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.875760   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.875808   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.876607   68536 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:13.878074   68536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:13.891861   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0719 19:40:13.891871   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0719 19:40:13.892275   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892348   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.892756   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892776   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.892861   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.892880   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.893123   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893188   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.893600   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893620   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.893708   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.893741   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.895892   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0719 19:40:13.896304   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.896825   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.896849   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.897241   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.897455   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.901471   68536 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-578533"
	W0719 19:40:13.901492   68536 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:13.901520   68536 host.go:66] Checking if "default-k8s-diff-port-578533" exists ...
	I0719 19:40:13.901886   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.901916   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.909214   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 19:40:13.909602   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.909706   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0719 19:40:13.910014   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910034   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.910088   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.910307   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.910493   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.910626   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.910647   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.911481   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.911755   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.913629   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.914010   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.915656   68536 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:13.915658   68536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:13.916749   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:13.916766   68536 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:13.916796   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.917023   68536 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:13.917042   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:13.917059   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.920356   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921022   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.921051   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921100   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.921342   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.921505   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.921690   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.921838   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.922025   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.922047   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.922369   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.922515   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.922679   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.922815   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:13.926929   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0719 19:40:13.927323   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.927751   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.927771   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.928249   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.928833   68536 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:13.928858   68536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:13.944683   68536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33103
	I0719 19:40:13.945111   68536 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:13.945633   68536 main.go:141] libmachine: Using API Version  1
	I0719 19:40:13.945660   68536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:13.946161   68536 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:13.946386   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetState
	I0719 19:40:13.948243   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .DriverName
	I0719 19:40:13.948454   68536 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:13.948469   68536 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:13.948490   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHHostname
	I0719 19:40:13.951289   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951582   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:b3:02", ip: ""} in network mk-default-k8s-diff-port-578533: {Iface:virbr4 ExpiryTime:2024-07-19 20:34:05 +0000 UTC Type:0 Mac:52:54:00:85:b3:02 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-578533 Clientid:01:52:54:00:85:b3:02}
	I0719 19:40:13.951617   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | domain default-k8s-diff-port-578533 has defined IP address 192.168.72.104 and MAC address 52:54:00:85:b3:02 in network mk-default-k8s-diff-port-578533
	I0719 19:40:13.951803   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHPort
	I0719 19:40:13.952029   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHKeyPath
	I0719 19:40:13.952201   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .GetSSHUsername
	I0719 19:40:13.952391   68536 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/default-k8s-diff-port-578533/id_rsa Username:docker}
	I0719 19:40:14.070828   68536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:14.091145   68536 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110080   68536 node_ready.go:49] node "default-k8s-diff-port-578533" has status "Ready":"True"
	I0719 19:40:14.110107   68536 node_ready.go:38] duration metric: took 18.918971ms for node "default-k8s-diff-port-578533" to be "Ready" ...
	I0719 19:40:14.110119   68536 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:14.119802   68536 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126113   68536 pod_ready.go:92] pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.126142   68536 pod_ready.go:81] duration metric: took 6.313891ms for pod "etcd-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.126155   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141789   68536 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.141819   68536 pod_ready.go:81] duration metric: took 15.656544ms for pod "kube-apiserver-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.141834   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156519   68536 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:14.156551   68536 pod_ready.go:81] duration metric: took 14.707971ms for pod "kube-controller-manager-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.156565   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:14.204678   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:14.204699   68536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:14.243288   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:14.245353   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:14.245379   68536 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:14.251097   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:14.282349   68536 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.282380   68536 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:14.324904   68536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:14.538669   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.538706   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539085   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.539097   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539112   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539127   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.539141   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.539429   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.539450   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:14.539487   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.544896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:14.544916   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:14.545172   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:14.545177   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:14.545191   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.063896   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.063917   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064241   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064253   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) DBG | Closing plugin on server side
	I0719 19:40:15.064259   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.064276   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.064284   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.064494   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.064517   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658117   68536 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.333159998s)
	I0719 19:40:15.658174   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658189   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658470   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658491   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658502   68536 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:15.658512   68536 main.go:141] libmachine: (default-k8s-diff-port-578533) Calling .Close
	I0719 19:40:15.658736   68536 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:15.658753   68536 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:15.658764   68536 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-578533"
	I0719 19:40:15.661136   68536 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 19:40:15.662344   68536 addons.go:510] duration metric: took 1.787285769s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 19:40:15.679066   68536 pod_ready.go:92] pod "kube-proxy-xd6jl" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.679088   68536 pod_ready.go:81] duration metric: took 1.52251532s for pod "kube-proxy-xd6jl" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.679114   68536 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695199   68536 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:15.695221   68536 pod_ready.go:81] duration metric: took 16.099506ms for pod "kube-scheduler-default-k8s-diff-port-578533" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:15.695229   68536 pod_ready.go:38] duration metric: took 1.585098776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:15.695241   68536 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:15.695297   68536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:15.746226   68536 api_server.go:72] duration metric: took 1.871305968s to wait for apiserver process to appear ...
	I0719 19:40:15.746254   68536 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:15.746277   68536 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0719 19:40:15.761521   68536 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
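	The healthz probe reported here can be issued directly against this profile's apiserver with the same address and port (-k because the cluster CA is not in the local trust store):
	curl -k https://192.168.72.104:8444/healthz
	# expected response body: ok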
	I0719 19:40:15.762619   68536 api_server.go:141] control plane version: v1.30.3
	I0719 19:40:15.762646   68536 api_server.go:131] duration metric: took 16.383582ms to wait for apiserver health ...
	I0719 19:40:15.762656   68536 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:15.900579   68536 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:15.900627   68536 system_pods.go:61] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900639   68536 system_pods.go:61] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:15.900648   68536 system_pods.go:61] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:15.900657   68536 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:15.900664   68536 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:15.900669   68536 system_pods.go:61] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:15.900675   68536 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:15.900687   68536 system_pods.go:61] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:15.900701   68536 system_pods.go:61] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:15.900711   68536 system_pods.go:74] duration metric: took 138.04766ms to wait for pod list to return data ...
	I0719 19:40:15.900726   68536 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:16.094864   68536 default_sa.go:45] found service account: "default"
	I0719 19:40:16.094889   68536 default_sa.go:55] duration metric: took 194.152947ms for default service account to be created ...
	I0719 19:40:16.094899   68536 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:16.297477   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.297508   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297515   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 19:40:16.297521   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.297526   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.297531   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.297535   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.297539   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.297543   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.297550   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 19:40:16.297572   68536 retry.go:31] will retry after 294.572454ms: missing components: kube-dns
	I0719 19:40:16.599274   68536 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:16.599306   68536 system_pods.go:89] "coredns-7db6d8ff4d-6qndw" [2b80cbbf-e1de-449d-affb-90c7e626e11a] Running
	I0719 19:40:16.599315   68536 system_pods.go:89] "coredns-7db6d8ff4d-gmljj" [e3c23a75-7c98-4704-b159-465a46e71deb] Running
	I0719 19:40:16.599343   68536 system_pods.go:89] "etcd-default-k8s-diff-port-578533" [2667fb3b-07c7-4feb-9c47-aec61d610319] Running
	I0719 19:40:16.599351   68536 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-578533" [54d5ed69-0572-4d75-b2c2-868a2d48f048] Running
	I0719 19:40:16.599358   68536 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-578533" [ba65a51a-7ce6-4ba9-ad87-b2763c49732f] Running
	I0719 19:40:16.599365   68536 system_pods.go:89] "kube-proxy-xd6jl" [699a18fd-c6e6-4c85-9799-0630bb7932b9] Running
	I0719 19:40:16.599371   68536 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-578533" [28fe9666-6e36-479a-9908-0ed40fbe9655] Running
	I0719 19:40:16.599386   68536 system_pods.go:89] "metrics-server-569cc877fc-kz2bz" [dc109926-e072-47e1-944c-a9614741fdaa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:16.599392   68536 system_pods.go:89] "storage-provisioner" [732e14b5-3120-491f-be0f-2d2ffcd7a799] Running
	I0719 19:40:16.599408   68536 system_pods.go:126] duration metric: took 504.501449ms to wait for k8s-apps to be running ...
	I0719 19:40:16.599418   68536 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:16.599461   68536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:16.613644   68536 system_svc.go:56] duration metric: took 14.216133ms WaitForService to wait for kubelet
	I0719 19:40:16.613673   68536 kubeadm.go:582] duration metric: took 2.73876141s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:16.613691   68536 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:16.695470   68536 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:16.695494   68536 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:16.695503   68536 node_conditions.go:105] duration metric: took 81.807505ms to run NodePressure ...
	I0719 19:40:16.695513   68536 start.go:241] waiting for startup goroutines ...
	I0719 19:40:16.695523   68536 start.go:246] waiting for cluster config update ...
	I0719 19:40:16.695533   68536 start.go:255] writing updated cluster config ...
	I0719 19:40:16.695817   68536 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:16.743733   68536 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 19:40:16.746063   68536 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-578533" cluster and "default" namespace by default
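At this point kubectl is pointed at the new profile. A minimal way to confirm the cluster from the host, assuming the context name printed above and the default kubeconfig location (illustrative commands, not part of the test run):

    kubectl config use-context default-k8s-diff-port-578533   # select the profile's context
    kubectl get nodes                                          # the control-plane node should report Ready
    kubectl -n kube-system get pods                            # coredns, etcd, apiserver, kube-proxy, etc.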
	I0719 19:40:17.805890   68019 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.059736517s)
	I0719 19:40:17.805973   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:17.821097   68019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 19:40:17.831114   68019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:40:17.840896   68019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:40:17.840926   68019 kubeadm.go:157] found existing configuration files:
	
	I0719 19:40:17.840974   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:40:17.850003   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:40:17.850059   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:40:17.859304   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:40:17.868663   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:40:17.868711   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:40:17.877527   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.886554   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:40:17.886608   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:40:17.895713   68019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:40:17.904287   68019 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:40:17.904352   68019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
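The stale-config cleanup above reduces to one pattern per file: keep the kubeconfig only if it already references control-plane.minikube.internal:8443, otherwise remove it so kubeadm init can regenerate it. A condensed sketch of that logic (illustrative shell, not the exact minikube implementation):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # file missing or pointing elsewhere: drop it
    done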
	I0719 19:40:17.913115   68019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:40:17.956895   68019 kubeadm.go:310] W0719 19:40:17.928980    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:17.958338   68019 kubeadm.go:310] W0719 19:40:17.930488    2926 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 19:40:18.073705   68019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:40:26.630439   68019 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 19:40:26.630525   68019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:40:26.630649   68019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:40:26.630765   68019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:40:26.630913   68019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 19:40:26.631023   68019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:40:26.632660   68019 out.go:204]   - Generating certificates and keys ...
	I0719 19:40:26.632745   68019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:40:26.632833   68019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:40:26.632906   68019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:40:26.632975   68019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:40:26.633093   68019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:40:26.633184   68019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:40:26.633270   68019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:40:26.633378   68019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:40:26.633474   68019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:40:26.633572   68019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:40:26.633627   68019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:40:26.633705   68019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:40:26.633780   68019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:40:26.633868   68019 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 19:40:26.633942   68019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:40:26.634024   68019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:40:26.634119   68019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:40:26.634250   68019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:40:26.634354   68019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:40:26.635957   68019 out.go:204]   - Booting up control plane ...
	I0719 19:40:26.636062   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:40:26.636163   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:40:26.636259   68019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:40:26.636381   68019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:40:26.636505   68019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:40:26.636570   68019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:40:26.636727   68019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 19:40:26.636813   68019 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 19:40:26.636888   68019 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206871s
	I0719 19:40:26.637000   68019 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 19:40:26.637088   68019 kubeadm.go:310] [api-check] The API server is healthy after 5.002825608s
	I0719 19:40:26.637239   68019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 19:40:26.637387   68019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 19:40:26.637470   68019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 19:40:26.637681   68019 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-971041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 19:40:26.637737   68019 kubeadm.go:310] [bootstrap-token] Using token: r7dsc0.erulzfzieqtgxnlo
	I0719 19:40:26.638982   68019 out.go:204]   - Configuring RBAC rules ...
	I0719 19:40:26.639079   68019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 19:40:26.639165   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 19:40:26.639306   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 19:40:26.639427   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 19:40:26.639522   68019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 19:40:26.639599   68019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 19:40:26.639692   68019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 19:40:26.639731   68019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 19:40:26.639770   68019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 19:40:26.639779   68019 kubeadm.go:310] 
	I0719 19:40:26.639859   68019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 19:40:26.639868   68019 kubeadm.go:310] 
	I0719 19:40:26.639935   68019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 19:40:26.639941   68019 kubeadm.go:310] 
	I0719 19:40:26.639974   68019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 19:40:26.640070   68019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 19:40:26.640145   68019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 19:40:26.640161   68019 kubeadm.go:310] 
	I0719 19:40:26.640242   68019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 19:40:26.640249   68019 kubeadm.go:310] 
	I0719 19:40:26.640288   68019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 19:40:26.640303   68019 kubeadm.go:310] 
	I0719 19:40:26.640386   68019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 19:40:26.640476   68019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 19:40:26.640572   68019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 19:40:26.640582   68019 kubeadm.go:310] 
	I0719 19:40:26.640655   68019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 19:40:26.640733   68019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 19:40:26.640745   68019 kubeadm.go:310] 
	I0719 19:40:26.640827   68019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.640965   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 \
	I0719 19:40:26.641007   68019 kubeadm.go:310] 	--control-plane 
	I0719 19:40:26.641015   68019 kubeadm.go:310] 
	I0719 19:40:26.641105   68019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 19:40:26.641117   68019 kubeadm.go:310] 
	I0719 19:40:26.641187   68019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r7dsc0.erulzfzieqtgxnlo \
	I0719 19:40:26.641282   68019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa68adac1b380a2a3a35663ddc6173ce984e71b3443233941381fdc37e32a0b4 
	I0719 19:40:26.641293   68019 cni.go:84] Creating CNI manager for ""
	I0719 19:40:26.641302   68019 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 19:40:26.642788   68019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 19:40:26.643983   68019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 19:40:26.655231   68019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
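The 496-byte conflist written here is minikube's bridge CNI template; its contents are not echoed in the log. One way to inspect what actually landed on the node, assuming the no-preload-971041 profile (illustrative command):

    minikube -p no-preload-971041 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

The file is expected to declare a "bridge" plugin with host-local IPAM plus a "portmap" plugin, which is what the "Configuring bridge CNI" step above installs.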
	I0719 19:40:26.673487   68019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 19:40:26.673555   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:26.673573   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-971041 minikube.k8s.io/updated_at=2024_07_19T19_40_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=no-preload-971041 minikube.k8s.io/primary=true
	I0719 19:40:26.702916   68019 ops.go:34] apiserver oom_adj: -16
	I0719 19:40:26.871642   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.372636   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:27.871695   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.372395   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:28.871975   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.371869   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:29.872364   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.372618   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:30.871974   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.372607   68019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 19:40:31.484697   68019 kubeadm.go:1113] duration metric: took 4.811203988s to wait for elevateKubeSystemPrivileges
	I0719 19:40:31.484735   68019 kubeadm.go:394] duration metric: took 4m59.38336524s to StartCluster
	I0719 19:40:31.484758   68019 settings.go:142] acquiring lock: {Name:mkcf95271f2ac91a55c2f4ae0f8a0ccaa24777be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.484852   68019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:40:31.487571   68019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/kubeconfig: {Name:mk0bbb5f0de0e816a151363ad4427bbf9de52f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 19:40:31.487903   68019 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.119 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 19:40:31.488046   68019 config.go:182] Loaded profile config "no-preload-971041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:40:31.488017   68019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 19:40:31.488091   68019 addons.go:69] Setting storage-provisioner=true in profile "no-preload-971041"
	I0719 19:40:31.488105   68019 addons.go:69] Setting default-storageclass=true in profile "no-preload-971041"
	I0719 19:40:31.488136   68019 addons.go:234] Setting addon storage-provisioner=true in "no-preload-971041"
	I0719 19:40:31.488136   68019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-971041"
	W0719 19:40:31.488145   68019 addons.go:243] addon storage-provisioner should already be in state true
	I0719 19:40:31.488173   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488097   68019 addons.go:69] Setting metrics-server=true in profile "no-preload-971041"
	I0719 19:40:31.488220   68019 addons.go:234] Setting addon metrics-server=true in "no-preload-971041"
	W0719 19:40:31.488229   68019 addons.go:243] addon metrics-server should already be in state true
	I0719 19:40:31.488253   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.488560   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488566   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488568   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.488588   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488589   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.488590   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.489333   68019 out.go:177] * Verifying Kubernetes components...
	I0719 19:40:31.490709   68019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 19:40:31.505073   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I0719 19:40:31.505121   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0719 19:40:31.505243   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0719 19:40:31.505554   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505575   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.505651   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.506136   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506161   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506138   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506177   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506369   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.506390   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.506521   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506582   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506762   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.506950   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.507074   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507101   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.507097   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.507134   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.510086   68019 addons.go:234] Setting addon default-storageclass=true in "no-preload-971041"
	W0719 19:40:31.510104   68019 addons.go:243] addon default-storageclass should already be in state true
	I0719 19:40:31.510154   68019 host.go:66] Checking if "no-preload-971041" exists ...
	I0719 19:40:31.510697   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.510727   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.523120   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 19:40:31.523463   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0719 19:40:31.523899   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.523950   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.524463   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524479   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524580   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.524590   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.524931   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.524973   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.525131   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.525164   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.526929   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527498   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.527764   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0719 19:40:31.528531   68019 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 19:40:31.528562   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.529323   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.529340   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.529389   68019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 19:40:28.617451   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:40:28.617640   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:40:31.529719   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.530357   68019 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19307-14841/.minikube/bin/docker-machine-driver-kvm2
	I0719 19:40:31.530369   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 19:40:31.530383   68019 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:40:31.530386   68019 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 19:40:31.530405   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.531478   68019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.531494   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 19:40:31.531509   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.533843   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534251   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.534279   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.534542   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.534701   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.534827   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.534968   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.538822   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539296   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.539468   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.539598   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.539745   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.539894   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.540051   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.547930   68019 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40325
	I0719 19:40:31.548415   68019 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:40:31.548833   68019 main.go:141] libmachine: Using API Version  1
	I0719 19:40:31.548852   68019 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:40:31.549151   68019 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:40:31.549309   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetState
	I0719 19:40:31.550633   68019 main.go:141] libmachine: (no-preload-971041) Calling .DriverName
	I0719 19:40:31.550855   68019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.550869   68019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 19:40:31.550881   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHHostname
	I0719 19:40:31.553528   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554109   68019 main.go:141] libmachine: (no-preload-971041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:b8:bd", ip: ""} in network mk-no-preload-971041: {Iface:virbr3 ExpiryTime:2024-07-19 20:35:06 +0000 UTC Type:0 Mac:52:54:00:72:b8:bd Iaid: IPaddr:192.168.50.119 Prefix:24 Hostname:no-preload-971041 Clientid:01:52:54:00:72:b8:bd}
	I0719 19:40:31.554134   68019 main.go:141] libmachine: (no-preload-971041) DBG | domain no-preload-971041 has defined IP address 192.168.50.119 and MAC address 52:54:00:72:b8:bd in network mk-no-preload-971041
	I0719 19:40:31.554298   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHPort
	I0719 19:40:31.554650   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHKeyPath
	I0719 19:40:31.554842   68019 main.go:141] libmachine: (no-preload-971041) Calling .GetSSHUsername
	I0719 19:40:31.554962   68019 sshutil.go:53] new ssh client: &{IP:192.168.50.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/no-preload-971041/id_rsa Username:docker}
	I0719 19:40:31.716869   68019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 19:40:31.740991   68019 node_ready.go:35] waiting up to 6m0s for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749672   68019 node_ready.go:49] node "no-preload-971041" has status "Ready":"True"
	I0719 19:40:31.749700   68019 node_ready.go:38] duration metric: took 8.681441ms for node "no-preload-971041" to be "Ready" ...
	I0719 19:40:31.749713   68019 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:31.755037   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:31.865867   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 19:40:31.866793   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 19:40:31.866807   68019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 19:40:31.876604   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 19:40:31.890924   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 19:40:31.890947   68019 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 19:40:31.930299   68019 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:31.930326   68019 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 19:40:31.982650   68019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 19:40:32.488961   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.488991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489025   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489051   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489355   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489372   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489382   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489390   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489398   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.489424   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489431   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489439   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.489446   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.489619   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489639   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489676   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.489689   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:32.489690   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506595   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:32.506613   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:32.506974   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:32.506985   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:32.507001   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.019915   68019 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037215511s)
	I0719 19:40:33.019974   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.019991   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020393   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020409   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020415   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020433   68019 main.go:141] libmachine: Making call to close driver server
	I0719 19:40:33.020442   68019 main.go:141] libmachine: (no-preload-971041) Calling .Close
	I0719 19:40:33.020806   68019 main.go:141] libmachine: (no-preload-971041) DBG | Closing plugin on server side
	I0719 19:40:33.020860   68019 main.go:141] libmachine: Successfully made call to close driver server
	I0719 19:40:33.020875   68019 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 19:40:33.020893   68019 addons.go:475] Verifying addon metrics-server=true in "no-preload-971041"
	I0719 19:40:33.023369   68019 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 19:40:33.024402   68019 addons.go:510] duration metric: took 1.536389874s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
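With storage-provisioner, default-storageclass and metrics-server reported as enabled, a quick spot check that the corresponding objects exist (illustrative commands against this profile; resource names are the usual ones for these addons and may differ by version):

    minikube -p no-preload-971041 addons list                 # addon status as minikube sees it
    kubectl -n kube-system get deployment metrics-server      # created by the manifests applied above
    kubectl -n kube-system get pod storage-provisioner        # pod name as listed in the system_pods output
    kubectl get storageclass                                  # default-storageclass typically creates "standard"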
	I0719 19:40:33.763711   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:36.262760   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:38.761973   68019 pod_ready.go:102] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"False"
	I0719 19:40:39.261665   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.261686   68019 pod_ready.go:81] duration metric: took 7.506626451s for pod "coredns-5cfdc65f69-7kdqd" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.261696   68019 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268952   68019 pod_ready.go:92] pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.268968   68019 pod_ready.go:81] duration metric: took 7.267035ms for pod "coredns-5cfdc65f69-d5xr6" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.268976   68019 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274245   68019 pod_ready.go:92] pod "etcd-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.274263   68019 pod_ready.go:81] duration metric: took 5.279919ms for pod "etcd-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.274273   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278452   68019 pod_ready.go:92] pod "kube-apiserver-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:39.278469   68019 pod_ready.go:81] duration metric: took 4.189107ms for pod "kube-apiserver-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:39.278479   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285769   68019 pod_ready.go:92] pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.285797   68019 pod_ready.go:81] duration metric: took 1.007309472s for pod "kube-controller-manager-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.285814   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459001   68019 pod_ready.go:92] pod "kube-proxy-445wh" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.459024   68019 pod_ready.go:81] duration metric: took 173.203584ms for pod "kube-proxy-445wh" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.459033   68019 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858861   68019 pod_ready.go:92] pod "kube-scheduler-no-preload-971041" in "kube-system" namespace has status "Ready":"True"
	I0719 19:40:40.858890   68019 pod_ready.go:81] duration metric: took 399.849812ms for pod "kube-scheduler-no-preload-971041" in "kube-system" namespace to be "Ready" ...
	I0719 19:40:40.858901   68019 pod_ready.go:38] duration metric: took 9.109176659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 19:40:40.858919   68019 api_server.go:52] waiting for apiserver process to appear ...
	I0719 19:40:40.858980   68019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:40:40.873763   68019 api_server.go:72] duration metric: took 9.385790935s to wait for apiserver process to appear ...
	I0719 19:40:40.873791   68019 api_server.go:88] waiting for apiserver healthz status ...
	I0719 19:40:40.873813   68019 api_server.go:253] Checking apiserver healthz at https://192.168.50.119:8443/healthz ...
	I0719 19:40:40.878709   68019 api_server.go:279] https://192.168.50.119:8443/healthz returned 200:
	ok
	I0719 19:40:40.879656   68019 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 19:40:40.879677   68019 api_server.go:131] duration metric: took 5.8793ms to wait for apiserver health ...
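The healthz probe above can be reproduced by hand against the same endpoint; the apiserver certificate is issued by minikube's own CA, so either skip TLS verification or go through the configured kubeconfig (illustrative commands):

    curl -k https://192.168.50.119:8443/healthz    # the endpoint polled above; expect "ok"
    kubectl get --raw /healthz                     # equivalent check using the configured kubeconfig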
	I0719 19:40:40.879686   68019 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 19:40:41.062525   68019 system_pods.go:59] 9 kube-system pods found
	I0719 19:40:41.062553   68019 system_pods.go:61] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.062561   68019 system_pods.go:61] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.062565   68019 system_pods.go:61] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.062569   68019 system_pods.go:61] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.062573   68019 system_pods.go:61] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.062576   68019 system_pods.go:61] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.062579   68019 system_pods.go:61] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.062584   68019 system_pods.go:61] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.062588   68019 system_pods.go:61] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.062595   68019 system_pods.go:74] duration metric: took 182.903201ms to wait for pod list to return data ...
	I0719 19:40:41.062601   68019 default_sa.go:34] waiting for default service account to be created ...
	I0719 19:40:41.259408   68019 default_sa.go:45] found service account: "default"
	I0719 19:40:41.259432   68019 default_sa.go:55] duration metric: took 196.824864ms for default service account to be created ...
	I0719 19:40:41.259440   68019 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 19:40:41.462209   68019 system_pods.go:86] 9 kube-system pods found
	I0719 19:40:41.462243   68019 system_pods.go:89] "coredns-5cfdc65f69-7kdqd" [1b0eeec5-f993-4d90-a3e0-5254bcc39f64] Running
	I0719 19:40:41.462251   68019 system_pods.go:89] "coredns-5cfdc65f69-d5xr6" [5f5a96c3-84ef-429c-b31d-472fcc247862] Running
	I0719 19:40:41.462258   68019 system_pods.go:89] "etcd-no-preload-971041" [8852dc62-d441-43c0-97e5-6fb333ee3d26] Running
	I0719 19:40:41.462264   68019 system_pods.go:89] "kube-apiserver-no-preload-971041" [083b50e1-e9fa-4e4b-a684-a6d461c7b45e] Running
	I0719 19:40:41.462271   68019 system_pods.go:89] "kube-controller-manager-no-preload-971041" [45ee5801-3b8a-4072-a4c6-716525a06d87] Running
	I0719 19:40:41.462276   68019 system_pods.go:89] "kube-proxy-445wh" [468d052a-a8d2-47f6-9054-813dd3ee33ee] Running
	I0719 19:40:41.462282   68019 system_pods.go:89] "kube-scheduler-no-preload-971041" [3989f412-bfc8-41c2-a9df-ef2b72f95f37] Running
	I0719 19:40:41.462291   68019 system_pods.go:89] "metrics-server-78fcd8795b-r6hdb" [13d26205-03d4-4719-bfa0-199a2e23b52a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 19:40:41.462295   68019 system_pods.go:89] "storage-provisioner" [fedb9d1a-139b-4627-8efa-64dd1481d8d4] Running
	I0719 19:40:41.462304   68019 system_pods.go:126] duration metric: took 202.857782ms to wait for k8s-apps to be running ...
	I0719 19:40:41.462312   68019 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 19:40:41.462377   68019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:40:41.478453   68019 system_svc.go:56] duration metric: took 16.131704ms WaitForService to wait for kubelet
	I0719 19:40:41.478487   68019 kubeadm.go:582] duration metric: took 9.990516808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 19:40:41.478510   68019 node_conditions.go:102] verifying NodePressure condition ...
	I0719 19:40:41.660126   68019 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 19:40:41.660154   68019 node_conditions.go:123] node cpu capacity is 2
	I0719 19:40:41.660165   68019 node_conditions.go:105] duration metric: took 181.649686ms to run NodePressure ...
	I0719 19:40:41.660176   68019 start.go:241] waiting for startup goroutines ...
	I0719 19:40:41.660182   68019 start.go:246] waiting for cluster config update ...
	I0719 19:40:41.660192   68019 start.go:255] writing updated cluster config ...
	I0719 19:40:41.660455   68019 ssh_runner.go:195] Run: rm -f paused
	I0719 19:40:41.707917   68019 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 19:40:41.710102   68019 out.go:177] * Done! kubectl is now configured to use "no-preload-971041" cluster and "default" namespace by default
	I0719 19:41:08.619807   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:41:08.620131   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:41:08.620145   68995 kubeadm.go:310] 
	I0719 19:41:08.620196   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:41:08.620252   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:41:08.620262   68995 kubeadm.go:310] 
	I0719 19:41:08.620302   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:41:08.620401   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:41:08.620615   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:41:08.620632   68995 kubeadm.go:310] 
	I0719 19:41:08.620765   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:41:08.620824   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:41:08.620872   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:41:08.620882   68995 kubeadm.go:310] 
	I0719 19:41:08.621033   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:41:08.621168   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:41:08.621179   68995 kubeadm.go:310] 
	I0719 19:41:08.621349   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:41:08.621469   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:41:08.621576   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:41:08.621664   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:41:08.621677   68995 kubeadm.go:310] 
	I0719 19:41:08.622038   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:41:08.622184   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:41:08.622310   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 19:41:08.622452   68995 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 19:41:08.622521   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 19:41:09.080325   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:41:09.095325   68995 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 19:41:09.104753   68995 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 19:41:09.104771   68995 kubeadm.go:157] found existing configuration files:
	
	I0719 19:41:09.104818   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 19:41:09.113315   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 19:41:09.113375   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 19:41:09.122025   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 19:41:09.130333   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 19:41:09.130390   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 19:41:09.139159   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.147867   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 19:41:09.147913   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 19:41:09.156426   68995 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 19:41:09.165249   68995 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 19:41:09.165299   68995 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 19:41:09.174271   68995 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 19:41:09.379598   68995 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 19:43:05.534759   68995 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 19:43:05.534861   68995 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 19:43:05.536588   68995 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 19:43:05.536645   68995 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 19:43:05.536728   68995 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 19:43:05.536857   68995 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 19:43:05.536952   68995 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 19:43:05.537043   68995 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 19:43:05.538861   68995 out.go:204]   - Generating certificates and keys ...
	I0719 19:43:05.538966   68995 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 19:43:05.539081   68995 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 19:43:05.539168   68995 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 19:43:05.539245   68995 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 19:43:05.539349   68995 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 19:43:05.539395   68995 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 19:43:05.539453   68995 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 19:43:05.539504   68995 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 19:43:05.539578   68995 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 19:43:05.539668   68995 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 19:43:05.539716   68995 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 19:43:05.539763   68995 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 19:43:05.539812   68995 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 19:43:05.539885   68995 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 19:43:05.539968   68995 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 19:43:05.540022   68995 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 19:43:05.540135   68995 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 19:43:05.540239   68995 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 19:43:05.540291   68995 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 19:43:05.540376   68995 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 19:43:05.542005   68995 out.go:204]   - Booting up control plane ...
	I0719 19:43:05.542089   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 19:43:05.542159   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 19:43:05.542219   68995 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 19:43:05.542290   68995 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 19:43:05.542442   68995 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 19:43:05.542510   68995 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 19:43:05.542610   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.542826   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.542939   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543171   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543252   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543470   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543578   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.543759   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.543891   68995 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 19:43:05.544130   68995 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 19:43:05.544144   68995 kubeadm.go:310] 
	I0719 19:43:05.544185   68995 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 19:43:05.544237   68995 kubeadm.go:310] 		timed out waiting for the condition
	I0719 19:43:05.544252   68995 kubeadm.go:310] 
	I0719 19:43:05.544286   68995 kubeadm.go:310] 	This error is likely caused by:
	I0719 19:43:05.544316   68995 kubeadm.go:310] 		- The kubelet is not running
	I0719 19:43:05.544416   68995 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 19:43:05.544423   68995 kubeadm.go:310] 
	I0719 19:43:05.544574   68995 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 19:43:05.544628   68995 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 19:43:05.544675   68995 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 19:43:05.544683   68995 kubeadm.go:310] 
	I0719 19:43:05.544816   68995 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 19:43:05.544917   68995 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 19:43:05.544926   68995 kubeadm.go:310] 
	I0719 19:43:05.545025   68995 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 19:43:05.545099   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 19:43:05.545183   68995 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 19:43:05.545275   68995 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 19:43:05.545312   68995 kubeadm.go:310] 
	I0719 19:43:05.545346   68995 kubeadm.go:394] duration metric: took 8m2.050412966s to StartCluster
	I0719 19:43:05.545390   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 19:43:05.545442   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 19:43:05.585066   68995 cri.go:89] found id: ""
	I0719 19:43:05.585101   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.585114   68995 logs.go:278] No container was found matching "kube-apiserver"
	I0719 19:43:05.585123   68995 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 19:43:05.585197   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 19:43:05.619270   68995 cri.go:89] found id: ""
	I0719 19:43:05.619298   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.619308   68995 logs.go:278] No container was found matching "etcd"
	I0719 19:43:05.619316   68995 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 19:43:05.619378   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 19:43:05.652886   68995 cri.go:89] found id: ""
	I0719 19:43:05.652924   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.652935   68995 logs.go:278] No container was found matching "coredns"
	I0719 19:43:05.652943   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 19:43:05.653004   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 19:43:05.687430   68995 cri.go:89] found id: ""
	I0719 19:43:05.687464   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.687475   68995 logs.go:278] No container was found matching "kube-scheduler"
	I0719 19:43:05.687482   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 19:43:05.687542   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 19:43:05.724176   68995 cri.go:89] found id: ""
	I0719 19:43:05.724207   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.724218   68995 logs.go:278] No container was found matching "kube-proxy"
	I0719 19:43:05.724226   68995 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 19:43:05.724286   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 19:43:05.757047   68995 cri.go:89] found id: ""
	I0719 19:43:05.757070   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.757077   68995 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 19:43:05.757083   68995 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 19:43:05.757131   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 19:43:05.799705   68995 cri.go:89] found id: ""
	I0719 19:43:05.799733   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.799743   68995 logs.go:278] No container was found matching "kindnet"
	I0719 19:43:05.799751   68995 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 19:43:05.799811   68995 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 19:43:05.844622   68995 cri.go:89] found id: ""
	I0719 19:43:05.844647   68995 logs.go:276] 0 containers: []
	W0719 19:43:05.844655   68995 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 19:43:05.844665   68995 logs.go:123] Gathering logs for CRI-O ...
	I0719 19:43:05.844676   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 19:43:05.962843   68995 logs.go:123] Gathering logs for container status ...
	I0719 19:43:05.962880   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 19:43:06.010157   68995 logs.go:123] Gathering logs for kubelet ...
	I0719 19:43:06.010191   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 19:43:06.060738   68995 logs.go:123] Gathering logs for dmesg ...
	I0719 19:43:06.060771   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 19:43:06.075609   68995 logs.go:123] Gathering logs for describe nodes ...
	I0719 19:43:06.075650   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 19:43:06.155814   68995 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0719 19:43:06.155872   68995 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 19:43:06.155906   68995 out.go:239] * 
	W0719 19:43:06.155980   68995 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.156013   68995 out.go:239] * 
	W0719 19:43:06.156982   68995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 19:43:06.160611   68995 out.go:177] 
	W0719 19:43:06.161797   68995 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 19:43:06.161854   68995 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 19:43:06.161873   68995 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 19:43:06.163354   68995 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.407542737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418888407496490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=306c1340-9f57-4a62-8899-6cbc572b7003 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.408126369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6128652-00ed-4f24-991f-069f54960c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.408193680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6128652-00ed-4f24-991f-069f54960c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.408232448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6128652-00ed-4f24-991f-069f54960c6c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.439630259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19237f6a-d05f-4160-94e7-3a4184731c46 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.439716737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19237f6a-d05f-4160-94e7-3a4184731c46 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.440883199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4300ff5f-8521-402e-9919-c5c80c8edf58 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.441347419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418888441304131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4300ff5f-8521-402e-9919-c5c80c8edf58 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.441948427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=389e007d-c899-4137-ab05-13ddc1bfc7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.442006115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=389e007d-c899-4137-ab05-13ddc1bfc7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.442040681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=389e007d-c899-4137-ab05-13ddc1bfc7b5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.475596535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2f317d3-1a4d-4370-b5c9-ff0633b95cd4 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.475689702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2f317d3-1a4d-4370-b5c9-ff0633b95cd4 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.476638073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53e0cc31-40b3-4e4b-a3a3-f7c263569b89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.477072924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418888477044741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53e0cc31-40b3-4e4b-a3a3-f7c263569b89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.477528081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5116214-782b-4ce2-b5d8-ceb898c0c257 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.477593924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5116214-782b-4ce2-b5d8-ceb898c0c257 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.477636820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5116214-782b-4ce2-b5d8-ceb898c0c257 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.506353652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bce79d2a-7b58-4ef4-9c5d-41accfa50580 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.506474611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bce79d2a-7b58-4ef4-9c5d-41accfa50580 name=/runtime.v1.RuntimeService/Version
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.507458112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=255bc0f5-8d85-419c-9134-a04b8f618934 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.507849412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721418888507822812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=255bc0f5-8d85-419c-9134-a04b8f618934 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.508335625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74afd8e1-16d2-43d2-acb8-4fa68fe24d3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.508441461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74afd8e1-16d2-43d2-acb8-4fa68fe24d3a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 19:54:48 old-k8s-version-570369 crio[650]: time="2024-07-19 19:54:48.508479572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=74afd8e1-16d2-43d2-acb8-4fa68fe24d3a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 19:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051232] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.651686] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.772958] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.839428] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.060470] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070691] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149713] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.157817] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.254351] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul19 19:35] systemd-fstab-generator[833]: Ignoring "noauto" option for root device
	[  +0.082478] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.954850] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[ +10.480938] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 19:39] systemd-fstab-generator[5050]: Ignoring "noauto" option for root device
	[Jul19 19:41] systemd-fstab-generator[5329]: Ignoring "noauto" option for root device
	[  +0.064414] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:54:48 up 20 min,  0 users,  load average: 0.08, 0.03, 0.03
	Linux old-k8s-version-570369 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000271ec0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0008fca50, 0x24, 0x0, ...)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: net.(*Dialer).DialContext(0xc000034fc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0008fca50, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008cc660, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0008fca50, 0x24, 0x60, 0x7fec8d67db00, 0x118, ...)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: net/http.(*Transport).dial(0xc00055d400, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0008fca50, 0x24, 0x0, 0xc0004d66a0, 0x7fec8d6dbda8, ...)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: net/http.(*Transport).dialConn(0xc00055d400, 0x4f7fe00, 0xc000052030, 0x0, 0xc000905380, 0x5, 0xc0008fca50, 0x24, 0x0, 0xc0008b7b00, ...)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: net/http.(*Transport).dialConnFor(0xc00055d400, 0xc0008c8630)
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]: created by net/http.(*Transport).queueForDial
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6844]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 19 19:54:47 old-k8s-version-570369 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 19:54:47 old-k8s-version-570369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 19:54:47 old-k8s-version-570369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 142.
	Jul 19 19:54:47 old-k8s-version-570369 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 19:54:47 old-k8s-version-570369 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6871]: I0719 19:54:47.862348    6871 server.go:416] Version: v1.20.0
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6871]: I0719 19:54:47.862841    6871 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6871]: I0719 19:54:47.865531    6871 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6871]: W0719 19:54:47.866483    6871 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 19 19:54:47 old-k8s-version-570369 kubelet[6871]: I0719 19:54:47.866848    6871 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 2 (226.441616ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-570369" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (157.09s)
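The failure captured above is the kubelet never answering on http://localhost:10248/healthz while systemd keeps restarting it (restart counter at 142) and it warns "Cannot detect current cgroup on cgroup v2". As a hedged sketch only, the follow-up that the log itself suggests for this profile would look like the commands below; the profile name, crictl invocation, and the --extra-config flag are taken verbatim from the captured output, and whether the flag actually resolves this run is not verified here:

	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	minikube start -p old-k8s-version-570369 --extra-config=kubelet.cgroup-driver=systemd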

                                                
                                    

Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.76
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 11.76
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 49.46
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 118.22
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 139.8
38 TestAddons/parallel/Registry 20.13
40 TestAddons/parallel/InspektorGadget 11.39
42 TestAddons/parallel/HelmTiller 10.97
44 TestAddons/parallel/CSI 65.94
45 TestAddons/parallel/Headlamp 16.09
46 TestAddons/parallel/CloudSpanner 6.87
47 TestAddons/parallel/LocalPath 61.14
48 TestAddons/parallel/NvidiaDevicePlugin 6.8
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 45.66
56 TestCertExpiration 295.35
58 TestForceSystemdFlag 42.27
59 TestForceSystemdEnv 41.82
61 TestKVMDriverInstallOrUpdate 16.41
65 TestErrorSpam/setup 41.08
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.49
69 TestErrorSpam/unpause 1.52
70 TestErrorSpam/stop 4.82
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 83.43
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.94
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.71
82 TestFunctional/serial/CacheCmd/cache/add_local 2.03
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 32.55
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.29
93 TestFunctional/serial/LogsFileCmd 1.35
94 TestFunctional/serial/InvalidService 4.3
96 TestFunctional/parallel/ConfigCmd 0.29
97 TestFunctional/parallel/DashboardCmd 33.62
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.96
104 TestFunctional/parallel/ServiceCmdConnect 10.58
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 44.9
108 TestFunctional/parallel/SSHCmd 0.37
109 TestFunctional/parallel/CpCmd 1.24
110 TestFunctional/parallel/MySQL 21.76
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.28
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
120 TestFunctional/parallel/License 0.59
121 TestFunctional/parallel/Version/short 0.05
122 TestFunctional/parallel/Version/components 0.58
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
127 TestFunctional/parallel/ImageCommands/ImageBuild 5.61
128 TestFunctional/parallel/ImageCommands/Setup 1.79
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ServiceCmd/DeployApp 11.16
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.62
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
150 TestFunctional/parallel/ProfileCmd/profile_list 0.29
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
152 TestFunctional/parallel/MountCmd/any-port 20.87
153 TestFunctional/parallel/ServiceCmd/List 0.34
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
156 TestFunctional/parallel/ServiceCmd/Format 0.76
157 TestFunctional/parallel/ServiceCmd/URL 0.28
158 TestFunctional/parallel/MountCmd/specific-port 1.6
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 206.22
167 TestMultiControlPlane/serial/DeployApp 6.47
168 TestMultiControlPlane/serial/PingHostFromPods 1.14
169 TestMultiControlPlane/serial/AddWorkerNode 56.57
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.35
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.01
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 349.8
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 78.13
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.5
188 TestJSONOutput/start/Command 51.58
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.68
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.37
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 83.99
220 TestMountStart/serial/StartWithMountFirst 26.44
221 TestMountStart/serial/VerifyMountFirst 0.35
222 TestMountStart/serial/StartWithMountSecond 25
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.69
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.26
227 TestMountStart/serial/RestartStopped 22.99
228 TestMountStart/serial/VerifyMountPostStop 0.35
231 TestMultiNode/serial/FreshStart2Nodes 116.3
232 TestMultiNode/serial/DeployApp2Nodes 5.06
233 TestMultiNode/serial/PingHostFrom2Pods 0.73
234 TestMultiNode/serial/AddNode 52.46
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.84
238 TestMultiNode/serial/StopNode 2.15
239 TestMultiNode/serial/StartAfterStop 39.36
241 TestMultiNode/serial/DeleteNode 2.26
243 TestMultiNode/serial/RestartMultiNode 186.69
244 TestMultiNode/serial/ValidateNameConflict 42.76
251 TestScheduledStopUnix 110.54
255 TestRunningBinaryUpgrade 181.17
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 108.46
262 TestNoKubernetes/serial/StartWithStopK8s 18.52
263 TestNoKubernetes/serial/Start 36.55
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
265 TestNoKubernetes/serial/ProfileList 1.06
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 42.21
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
269 TestStoppedBinaryUpgrade/Setup 2.39
270 TestStoppedBinaryUpgrade/Upgrade 147.53
285 TestNetworkPlugins/group/false 3.27
290 TestPause/serial/Start 57.56
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
295 TestStartStop/group/no-preload/serial/FirstStart 98.68
296 TestPause/serial/SecondStartNoReconfiguration 68.79
298 TestStartStop/group/embed-certs/serial/FirstStart 115.17
299 TestPause/serial/Pause 0.94
300 TestPause/serial/VerifyStatus 0.24
301 TestPause/serial/Unpause 0.74
302 TestPause/serial/PauseAgain 0.87
303 TestPause/serial/DeletePaused 0.88
304 TestPause/serial/VerifyDeletedResources 0.67
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.66
307 TestStartStop/group/no-preload/serial/DeployApp 10.29
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
311 TestStartStop/group/embed-certs/serial/DeployApp 11.28
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
317 TestStartStop/group/no-preload/serial/SecondStart 684.17
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 613.64
323 TestStartStop/group/embed-certs/serial/SecondStart 546.38
324 TestStartStop/group/old-k8s-version/serial/Stop 4.28
325 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
336 TestStartStop/group/newest-cni/serial/FirstStart 44.97
337 TestNetworkPlugins/group/auto/Start 74.4
338 TestNetworkPlugins/group/kindnet/Start 83.32
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
341 TestStartStop/group/newest-cni/serial/Stop 7.83
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.43
343 TestStartStop/group/newest-cni/serial/SecondStart 52.63
344 TestNetworkPlugins/group/auto/KubeletFlags 0.22
345 TestNetworkPlugins/group/auto/NetCatPod 10.31
346 TestNetworkPlugins/group/auto/DNS 0.2
347 TestNetworkPlugins/group/auto/Localhost 0.13
348 TestNetworkPlugins/group/auto/HairPin 0.15
349 TestNetworkPlugins/group/calico/Start 93.5
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/newest-cni/serial/Pause 2.83
354 TestNetworkPlugins/group/custom-flannel/Start 97.99
355 TestNetworkPlugins/group/enable-default-cni/Start 103.57
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
359 TestNetworkPlugins/group/kindnet/DNS 0.17
360 TestNetworkPlugins/group/kindnet/Localhost 0.13
361 TestNetworkPlugins/group/kindnet/HairPin 0.13
362 TestNetworkPlugins/group/flannel/Start 105.54
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.2
365 TestNetworkPlugins/group/calico/NetCatPod 10.22
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.26
368 TestNetworkPlugins/group/calico/DNS 0.18
369 TestNetworkPlugins/group/calico/Localhost 0.19
370 TestNetworkPlugins/group/calico/HairPin 0.22
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.23
373 TestNetworkPlugins/group/custom-flannel/DNS 0.21
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
376 TestNetworkPlugins/group/bridge/Start 60.67
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
380 TestNetworkPlugins/group/flannel/ControllerPod 6
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
382 TestNetworkPlugins/group/flannel/NetCatPod 10.22
383 TestNetworkPlugins/group/flannel/DNS 0.16
384 TestNetworkPlugins/group/flannel/Localhost 0.13
385 TestNetworkPlugins/group/flannel/HairPin 0.12
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
387 TestNetworkPlugins/group/bridge/NetCatPod 10.22
388 TestNetworkPlugins/group/bridge/DNS 0.14
389 TestNetworkPlugins/group/bridge/Localhost 0.12
390 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (23.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-928056 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-928056 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.763808681s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.76s)
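For reference, the flow exercised above can be reproduced by hand. This is only a sketch: the profile name download-only-demo is an arbitrary stand-in for the generated one, and the command merely pre-populates the cache (ISO, preload tarball, kubectl) without provisioning a machine:

    # Pre-download artifacts for one Kubernetes version; no VM is created.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2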

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
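preload-exists appears to assert only that the tarball fetched in the previous step is now on disk. A quick manual check, using the cache path visible in the LogsDuration output below (normally ~/.minikube/cache/preloaded-tarball when MINIKUBE_HOME is unset):

    ls -lh /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/
    # expected in this run: preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4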

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-928056
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-928056: exit status 85 (54.558598ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:13 UTC |          |
	|         | -p download-only-928056        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:13:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:13:45.805697   22040 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:13:45.805790   22040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:13:45.805797   22040 out.go:304] Setting ErrFile to fd 2...
	I0719 18:13:45.805801   22040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:13:45.805972   22040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	W0719 18:13:45.806072   22040 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19307-14841/.minikube/config/config.json: open /home/jenkins/minikube-integration/19307-14841/.minikube/config/config.json: no such file or directory
	I0719 18:13:45.806627   22040 out.go:298] Setting JSON to true
	I0719 18:13:45.807455   22040 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3369,"bootTime":1721409457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:13:45.807512   22040 start.go:139] virtualization: kvm guest
	I0719 18:13:45.809948   22040 out.go:97] [download-only-928056] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0719 18:13:45.810054   22040 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 18:13:45.810110   22040 notify.go:220] Checking for updates...
	I0719 18:13:45.811428   22040 out.go:169] MINIKUBE_LOCATION=19307
	I0719 18:13:45.812778   22040 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:13:45.814042   22040 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:13:45.815285   22040 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:13:45.816515   22040 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 18:13:45.818873   22040 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 18:13:45.819115   22040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:13:45.916741   22040 out.go:97] Using the kvm2 driver based on user configuration
	I0719 18:13:45.916776   22040 start.go:297] selected driver: kvm2
	I0719 18:13:45.916791   22040 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:13:45.917108   22040 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:13:45.917238   22040 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:13:45.931644   22040 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:13:45.931689   22040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:13:45.932189   22040 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 18:13:45.932416   22040 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 18:13:45.932445   22040 cni.go:84] Creating CNI manager for ""
	I0719 18:13:45.932455   22040 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:13:45.932468   22040 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 18:13:45.932544   22040 start.go:340] cluster config:
	{Name:download-only-928056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-928056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:13:45.932848   22040 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:13:45.934539   22040 out.go:97] Downloading VM boot image ...
	I0719 18:13:45.934590   22040 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 18:13:54.804609   22040 out.go:97] Starting "download-only-928056" primary control-plane node in "download-only-928056" cluster
	I0719 18:13:54.804638   22040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 18:13:54.905757   22040 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 18:13:54.905797   22040 cache.go:56] Caching tarball of preloaded images
	I0719 18:13:54.905947   22040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 18:13:54.908009   22040 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 18:13:54.908029   22040 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 18:13:55.008570   22040 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-928056 host does not exist
	  To start a cluster, run: "minikube start -p download-only-928056"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
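The non-zero exit above is expected: the profile was created with --download-only, so no host exists and "minikube logs" can only replay the audit and last-start entries before exiting with status 85 (the control-plane node host does not exist). Judging by the test name and the PASS result, the assertion is about how long the logs command takes rather than its exit code. To reproduce by hand while the profile still exists:

    out/minikube-linux-amd64 logs -p download-only-928056
    echo "exit: $?"   # 85 in this run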

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-928056
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (11.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-106215 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-106215 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.757469352s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.76s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-106215
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-106215: exit status 85 (54.865229ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:13 UTC |                     |
	|         | -p download-only-928056        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| delete  | -p download-only-928056        | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| start   | -o=json --download-only        | download-only-106215 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC |                     |
	|         | -p download-only-106215        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:14:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:14:09.868255   22296 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:14:09.868484   22296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:14:09.868491   22296 out.go:304] Setting ErrFile to fd 2...
	I0719 18:14:09.868496   22296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:14:09.868646   22296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:14:09.869141   22296 out.go:298] Setting JSON to true
	I0719 18:14:09.869905   22296 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3393,"bootTime":1721409457,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:14:09.869955   22296 start.go:139] virtualization: kvm guest
	I0719 18:14:09.872351   22296 out.go:97] [download-only-106215] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:14:09.872491   22296 notify.go:220] Checking for updates...
	I0719 18:14:09.873763   22296 out.go:169] MINIKUBE_LOCATION=19307
	I0719 18:14:09.874970   22296 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:14:09.876231   22296 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:14:09.877593   22296 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:14:09.878764   22296 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 18:14:09.880912   22296 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 18:14:09.881111   22296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:14:09.911958   22296 out.go:97] Using the kvm2 driver based on user configuration
	I0719 18:14:09.911983   22296 start.go:297] selected driver: kvm2
	I0719 18:14:09.911994   22296 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:14:09.912300   22296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:14:09.912375   22296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:14:09.926678   22296 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:14:09.926719   22296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:14:09.927164   22296 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 18:14:09.927341   22296 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 18:14:09.927397   22296 cni.go:84] Creating CNI manager for ""
	I0719 18:14:09.927410   22296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:14:09.927417   22296 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 18:14:09.927467   22296 start.go:340] cluster config:
	{Name:download-only-106215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-106215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:14:09.927551   22296 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:14:09.929042   22296 out.go:97] Starting "download-only-106215" primary control-plane node in "download-only-106215" cluster
	I0719 18:14:09.929059   22296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:14:10.104966   22296 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 18:14:10.105008   22296 cache.go:56] Caching tarball of preloaded images
	I0719 18:14:10.105165   22296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 18:14:10.107004   22296 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 18:14:10.107023   22296 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0719 18:14:10.211522   22296 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-106215 host does not exist
	  To start a cluster, run: "minikube start -p download-only-106215"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-106215
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (49.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-111401 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-111401 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (49.463673034s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (49.46s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-111401
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-111401: exit status 85 (55.464066ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:13 UTC |                     |
	|         | -p download-only-928056             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| delete  | -p download-only-928056             | download-only-928056 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| start   | -o=json --download-only             | download-only-106215 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC |                     |
	|         | -p download-only-106215             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| delete  | -p download-only-106215             | download-only-106215 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC | 19 Jul 24 18:14 UTC |
	| start   | -o=json --download-only             | download-only-111401 | jenkins | v1.33.1 | 19 Jul 24 18:14 UTC |                     |
	|         | -p download-only-111401             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 18:14:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 18:14:21.938603   22505 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:14:21.938824   22505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:14:21.938831   22505 out.go:304] Setting ErrFile to fd 2...
	I0719 18:14:21.938836   22505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:14:21.939002   22505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:14:21.939532   22505 out.go:298] Setting JSON to true
	I0719 18:14:21.940352   22505 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3405,"bootTime":1721409457,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:14:21.940418   22505 start.go:139] virtualization: kvm guest
	I0719 18:14:21.942340   22505 out.go:97] [download-only-111401] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:14:21.942516   22505 notify.go:220] Checking for updates...
	I0719 18:14:21.943886   22505 out.go:169] MINIKUBE_LOCATION=19307
	I0719 18:14:21.945393   22505 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:14:21.946802   22505 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:14:21.948160   22505 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:14:21.949505   22505 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 18:14:21.951976   22505 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 18:14:21.952277   22505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:14:21.983007   22505 out.go:97] Using the kvm2 driver based on user configuration
	I0719 18:14:21.983034   22505 start.go:297] selected driver: kvm2
	I0719 18:14:21.983047   22505 start.go:901] validating driver "kvm2" against <nil>
	I0719 18:14:21.983349   22505 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:14:21.983443   22505 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19307-14841/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 18:14:21.997648   22505 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 18:14:21.997704   22505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 18:14:21.998154   22505 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 18:14:21.998300   22505 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 18:14:21.998348   22505 cni.go:84] Creating CNI manager for ""
	I0719 18:14:21.998360   22505 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 18:14:21.998367   22505 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 18:14:21.998420   22505 start.go:340] cluster config:
	{Name:download-only-111401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-111401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:14:21.998504   22505 iso.go:125] acquiring lock: {Name:mka126a3710ab8a115eb2a3299b735524430e5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 18:14:22.000173   22505 out.go:97] Starting "download-only-111401" primary control-plane node in "download-only-111401" cluster
	I0719 18:14:22.000189   22505 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 18:14:22.516105   22505 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 18:14:22.516139   22505 cache.go:56] Caching tarball of preloaded images
	I0719 18:14:22.516304   22505 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 18:14:22.518234   22505 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 18:14:22.518262   22505 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 18:14:22.627329   22505 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 18:14:40.613312   22505 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 18:14:40.613412   22505 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19307-14841/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 18:14:41.359575   22505 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0719 18:14:41.359930   22505 profile.go:143] Saving config to /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/download-only-111401/config.json ...
	I0719 18:14:41.359959   22505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/download-only-111401/config.json: {Name:mk194bce5e69f964e8a3e66bb8224669a1710dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 18:14:41.360113   22505 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 18:14:41.360246   22505 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19307-14841/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-111401 host does not exist
	  To start a cluster, run: "minikube start -p download-only-111401"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-111401
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-679369 --alsologtostderr --binary-mirror http://127.0.0.1:37635 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-679369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-679369
--- PASS: TestBinaryMirror (0.55s)
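TestBinaryMirror passes --binary-mirror, which points minikube at an alternate location for the kubectl, kubelet and kubeadm binaries; the 127.0.0.1:37635 address suggests the test serves them from a throwaway local endpoint. A hedged sketch of the same invocation (binary-mirror-demo is an arbitrary profile name, and the URL must be whatever actually hosts the release binaries):

    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:37635 \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-demo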

                                                
                                    
TestOffline (118.22s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-759970 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-759970 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m56.804683538s)
helpers_test.go:175: Cleaning up "offline-crio-759970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-759970
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-759970: (1.412608667s)
--- PASS: TestOffline (118.22s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-990895
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-990895: exit status 85 (47.553312ms)

                                                
                                                
-- stdout --
	* Profile "addons-990895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-990895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-990895
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-990895: exit status 85 (48.474496ms)

                                                
                                                
-- stdout --
	* Profile "addons-990895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-990895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (139.8s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-990895 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-990895 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.803219963s)
--- PASS: TestAddons/Setup (139.80s)
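Setup enables every addon in a single start invocation; on a profile that is already running, the same addons can be toggled individually with the addons subcommand, as the PreSetup and parallel addon tests in this report do. A minimal sketch against the profile created here (metrics-server is just an example addon name):

    out/minikube-linux-amd64 -p addons-990895 addons list
    out/minikube-linux-amd64 -p addons-990895 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-990895 addons disable metrics-server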

                                                
                                    
TestAddons/parallel/Registry (20.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.90708ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-fvxpl" [db275727-fea8-4f8b-9836-c0f394f7e085] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005731892s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5sm5z" [5e03528f-ac60-402a-8478-15cca52a26f9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004679156s
addons_test.go:342: (dbg) Run:  kubectl --context addons-990895 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-990895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-990895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.335592665s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 ip
2024/07/19 18:17:51 [DEBUG] GET http://192.168.39.239:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.13s)
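The probe used by this test doubles as a quick manual check for the registry addon: name resolution and reachability from inside the cluster, plus a plain GET against the node IP reported by "minikube ip", mirroring the debug line above. A sketch using this run's context and address:

    # In-cluster probe, same as the test:
    kubectl --context addons-990895 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host, mirroring the debug GET above:
    curl -s http://192.168.39.239:5000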

                                                
                                    
TestAddons/parallel/InspektorGadget (11.39s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nz2lz" [313290a2-2592-43b6-886a-d8f400023548] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004156116s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-990895
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-990895: (6.388311937s)
--- PASS: TestAddons/parallel/InspektorGadget (11.39s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.042258ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-rgpbq" [e4c9ebcd-0308-48aa-ad08-d6e84e6dbbe3] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005226622s
addons_test.go:475: (dbg) Run:  kubectl --context addons-990895 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-990895 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.405620417s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.97s)
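
The same tiller check can be run interactively; a minimal sketch, assuming the addon's standard tiller-deploy Deployment name and the helm 2.16.3 client image used above (the pod name helm-check is arbitrary):

	# Wait for the tiller Deployment installed by the helm-tiller addon to roll out
	kubectl --context addons-990895 -n kube-system rollout status deploy/tiller-deploy
	# Query client and server versions from a throwaway helm 2.x client pod
	kubectl --context addons-990895 -n kube-system run helm-check --rm --restart=Never -it \
	  --image=docker.io/alpine/helm:2.16.3 -- version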

                                                
                                    
x
+
TestAddons/parallel/CSI (65.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.670609ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-990895 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-990895 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d8a2f514-4078-4fe3-a2f6-e447032b7f71] Pending
helpers_test.go:344: "task-pv-pod" [d8a2f514-4078-4fe3-a2f6-e447032b7f71] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d8a2f514-4078-4fe3-a2f6-e447032b7f71] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003789565s
addons_test.go:586: (dbg) Run:  kubectl --context addons-990895 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-990895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-990895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-990895 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-990895 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-990895 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-990895 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [20aa39e0-5bf2-46e0-8e2b-d524397231b1] Pending
helpers_test.go:344: "task-pv-pod-restore" [20aa39e0-5bf2-46e0-8e2b-d524397231b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [20aa39e0-5bf2-46e0-8e2b-d524397231b1] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005960015s
addons_test.go:628: (dbg) Run:  kubectl --context addons-990895 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-990895 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-990895 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.740850099s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.94s)
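
The snapshot/restore round trip above can be inspected through the standard VolumeSnapshot and PVC fields; a minimal sketch using the object names from this run (the jsonpath expressions assume the stock snapshot.storage.k8s.io/v1 API):

	# Confirm the snapshot taken from hpvc is ready to use
	kubectl --context addons-990895 get volumesnapshot new-snapshot-demo \
	  -o jsonpath='{.status.readyToUse}'
	# Confirm the restored claim references that snapshot as its data source
	kubectl --context addons-990895 get pvc hpvc-restore \
	  -o jsonpath='{.spec.dataSource.kind}/{.spec.dataSource.name}'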

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-990895 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-990895 --alsologtostderr -v=1: (1.08340611s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-ggx6p" [3efb6210-449e-46c2-b799-ac0a331ba22f] Pending
helpers_test.go:344: "headlamp-7867546754-ggx6p" [3efb6210-449e-46c2-b799-ac0a331ba22f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-ggx6p" [3efb6210-449e-46c2-b799-ac0a331ba22f] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004726028s
--- PASS: TestAddons/parallel/Headlamp (16.09s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-cqglw" [fc8921d9-d889-4410-9784-b6c581ba1a98] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004215843s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-990895
--- PASS: TestAddons/parallel/CloudSpanner (6.87s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (61.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-990895 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-990895 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [848bc438-c87e-4a72-bc15-6a5c619702ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [848bc438-c87e-4a72-bc15-6a5c619702ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [848bc438-c87e-4a72-bc15-6a5c619702ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.004181994s
addons_test.go:992: (dbg) Run:  kubectl --context addons-990895 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 ssh "cat /opt/local-path-provisioner/pvc-31780053-8619-4414-948c-908236bd874e_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-990895 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-990895 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-990895 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-990895 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.291496726s)
--- PASS: TestAddons/parallel/LocalPath (61.14s)
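
While the test workload is still present, the provisioned volume can also be located directly on the node; a minimal sketch, assuming the local-path-provisioner data directory /opt/local-path-provisioner seen in this run:

	# Map the claim to its backing PersistentVolume
	kubectl --context addons-990895 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
	# List the provisioner's data directory on the node
	out/minikube-linux-amd64 -p addons-990895 ssh "ls /opt/local-path-provisioner/"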

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dpnm9" [a47bd84e-c957-4484-ad2b-20f5a4582ed9] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006474918s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-990895
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-z49l4" [e63c3dbb-e45b-4a70-9ab4-47bf7d57ed45] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004406527s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-990895 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-990895 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (45.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-403447 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-403447 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.213636047s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-403447 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-403447 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-403447 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-403447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-403447
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-403447: (1.00832663s)
--- PASS: TestCertOptions (45.66s)
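
A minimal sketch of verifying the custom SANs and API server port by hand, using the same certificate path and profile as the test (the grep pattern is just the standard x509 extension header):

	# Dump the apiserver certificate and show its Subject Alternative Names
	out/minikube-linux-amd64 -p cert-options-403447 ssh \
	  "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# Confirm the kubeconfig points at the non-default API server port 8555
	kubectl --context cert-options-403447 config view --minify -o jsonpath='{.clusters[0].cluster.server}'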

                                                
                                    
x
+
TestCertExpiration (295.35s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-411109 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0719 19:19:43.525004   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-411109 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m28.66584235s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-411109 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-411109 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (25.704090857s)
helpers_test.go:175: Cleaning up "cert-expiration-411109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-411109
--- PASS: TestCertExpiration (295.35s)
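
Before the profile is cleaned up, the effect of --cert-expiration can be checked directly on the node; a minimal sketch using the same certificate path as in TestCertOptions:

	# Show when the apiserver certificate expires (3m vs 8760h after the second start above)
	out/minikube-linux-amd64 -p cert-expiration-411109 ssh \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"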

                                                
                                    
x
+
TestForceSystemdFlag (42.27s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-316966 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-316966 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.078785795s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-316966 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-316966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-316966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-316966: (1.003497609s)
--- PASS: TestForceSystemdFlag (42.27s)
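
A minimal sketch of the same check in isolation, assuming CRI-O's standard cgroup_manager key lives in the drop-in the test reads:

	# With --force-systemd the drop-in should select the systemd cgroup manager
	out/minikube-linux-amd64 -p force-systemd-flag-316966 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# Expected output (assumption based on the flag's intent): cgroup_manager = "systemd"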

                                                
                                    
x
+
TestForceSystemdEnv (41.82s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-842470 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-842470 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.033659226s)
helpers_test.go:175: Cleaning up "force-systemd-env-842470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-842470
--- PASS: TestForceSystemdEnv (41.82s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (16.41s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (16.41s)

                                                
                                    
x
+
TestErrorSpam/setup (41.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-143758 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-143758 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-143758 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-143758 --driver=kvm2  --container-runtime=crio: (41.07798018s)
--- PASS: TestErrorSpam/setup (41.08s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
x
+
TestErrorSpam/stop (4.82s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop: (1.462326271s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop: (1.972662241s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-143758 --log_dir /tmp/nospam-143758 stop: (1.386607382s)
--- PASS: TestErrorSpam/stop (4.82s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19307-14841/.minikube/files/etc/test/nested/copy/22028/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (83.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0719 18:27:32.427225   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.433083   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.443326   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.463600   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.503865   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.584197   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:32.744595   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:33.064980   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:33.705885   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:34.986414   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:37.547940   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:42.668319   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:27:52.909183   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:28:13.390254   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-788436 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.432234678s)
--- PASS: TestFunctional/serial/StartWithProxy (83.43s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --alsologtostderr -v=8
E0719 18:28:54.351113   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-788436 --alsologtostderr -v=8: (39.939268991s)
functional_test.go:659: soft start took 39.939776374s for "functional-788436" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.94s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-788436 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:3.1: (1.287070389s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:3.3: (1.229644256s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 cache add registry.k8s.io/pause:latest: (1.195550601s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.71s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-788436 /tmp/TestFunctionalserialCacheCmdcacheadd_local1671102399/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache add minikube-local-cache-test:functional-788436
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 cache add minikube-local-cache-test:functional-788436: (1.748478427s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache delete minikube-local-cache-test:functional-788436
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-788436
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.179238ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
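
The reload sequence above amounts to: remove the cached image from the node, confirm it is gone, restore it from minikube's on-host cache, and confirm it is back. A minimal sketch of the same steps:

	out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image removed
	out/minikube-linux-amd64 -p functional-788436 cache reload
	out/minikube-linux-amd64 -p functional-788436 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again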

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 kubectl -- --context functional-788436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-788436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.55s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-788436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.548303338s)
functional_test.go:757: restart took 32.54841323s for "functional-788436" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.55s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-788436 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 logs: (1.291578113s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 logs --file /tmp/TestFunctionalserialLogsFileCmd1842643338/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 logs --file /tmp/TestFunctionalserialLogsFileCmd1842643338/001/logs.txt: (1.346857478s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.3s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-788436 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-788436
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-788436: exit status 115 (258.528017ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.190:31565 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-788436 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 config get cpus: exit status 14 (48.582497ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 config get cpus: exit status 14 (47.895943ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (33.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-788436 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-788436 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 32125: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (33.62s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.192746ms)

                                                
                                                
-- stdout --
	* [functional-788436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:29:54.963288   31745 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:29:54.963786   31745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:29:54.963802   31745 out.go:304] Setting ErrFile to fd 2...
	I0719 18:29:54.963811   31745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:29:54.964217   31745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:29:54.965049   31745 out.go:298] Setting JSON to false
	I0719 18:29:54.966133   31745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4338,"bootTime":1721409457,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:29:54.966206   31745 start.go:139] virtualization: kvm guest
	I0719 18:29:54.968094   31745 out.go:177] * [functional-788436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 18:29:54.969289   31745 notify.go:220] Checking for updates...
	I0719 18:29:54.969292   31745 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:29:54.970503   31745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:29:54.971891   31745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:29:54.973128   31745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:29:54.974335   31745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:29:54.975420   31745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:29:54.976837   31745 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:29:54.977186   31745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:29:54.977239   31745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:29:54.992207   31745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0719 18:29:54.992608   31745 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:29:54.993080   31745 main.go:141] libmachine: Using API Version  1
	I0719 18:29:54.993101   31745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:29:54.993419   31745 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:29:54.993594   31745 main.go:141] libmachine: (functional-788436) Calling .DriverName
	I0719 18:29:54.993834   31745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:29:54.994231   31745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:29:54.994271   31745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:29:55.008778   31745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0719 18:29:55.009283   31745 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:29:55.009840   31745 main.go:141] libmachine: Using API Version  1
	I0719 18:29:55.009861   31745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:29:55.010156   31745 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:29:55.010366   31745 main.go:141] libmachine: (functional-788436) Calling .DriverName
	I0719 18:29:55.041649   31745 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 18:29:55.042949   31745 start.go:297] selected driver: kvm2
	I0719 18:29:55.042962   31745 start.go:901] validating driver "kvm2" against &{Name:functional-788436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-788436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:29:55.043091   31745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:29:55.045041   31745 out.go:177] 
	W0719 18:29:55.046291   31745 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 18:29:55.047518   31745 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
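The dry-run exercised above validates the requested configuration without creating or changing the VM, and its stderr shows the check that rejects any memory request below the usable 1800MB floor. A rough way to reproduce the failing case by hand against the same profile (a sketch; 250MB is the value this test deliberately undershoots with):
    out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # expected: non-zero exit (status 23 in this run) with "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY", and no profile changes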

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.226099ms)

                                                
                                                
-- stdout --
	* [functional-788436] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 18:29:55.223934   31827 out.go:291] Setting OutFile to fd 1 ...
	I0719 18:29:55.224066   31827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:29:55.224075   31827 out.go:304] Setting ErrFile to fd 2...
	I0719 18:29:55.224079   31827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 18:29:55.224332   31827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 18:29:55.224793   31827 out.go:298] Setting JSON to false
	I0719 18:29:55.225694   31827 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4338,"bootTime":1721409457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 18:29:55.225745   31827 start.go:139] virtualization: kvm guest
	I0719 18:29:55.227796   31827 out.go:177] * [functional-788436] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0719 18:29:55.229194   31827 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 18:29:55.229217   31827 notify.go:220] Checking for updates...
	I0719 18:29:55.231562   31827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 18:29:55.232782   31827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 18:29:55.233926   31827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 18:29:55.235094   31827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 18:29:55.236406   31827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 18:29:55.238207   31827 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 18:29:55.238829   31827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:29:55.238901   31827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:29:55.254813   31827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35835
	I0719 18:29:55.255188   31827 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:29:55.255704   31827 main.go:141] libmachine: Using API Version  1
	I0719 18:29:55.255726   31827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:29:55.256071   31827 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:29:55.256279   31827 main.go:141] libmachine: (functional-788436) Calling .DriverName
	I0719 18:29:55.256542   31827 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 18:29:55.256967   31827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 18:29:55.257013   31827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 18:29:55.272190   31827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0719 18:29:55.272618   31827 main.go:141] libmachine: () Calling .GetVersion
	I0719 18:29:55.273022   31827 main.go:141] libmachine: Using API Version  1
	I0719 18:29:55.273043   31827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 18:29:55.273287   31827 main.go:141] libmachine: () Calling .GetMachineName
	I0719 18:29:55.273450   31827 main.go:141] libmachine: (functional-788436) Calling .DriverName
	I0719 18:29:55.304566   31827 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0719 18:29:55.305851   31827 start.go:297] selected driver: kvm2
	I0719 18:29:55.305867   31827 start.go:901] validating driver "kvm2" against &{Name:functional-788436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-788436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 18:29:55.305966   31827 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 18:29:55.307981   31827 out.go:177] 
	W0719 18:29:55.309242   31827 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 18:29:55.310267   31827 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
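The French stdout/stderr above is the same dry-run failure rendered through minikube's translations. The log does not show how the locale was selected; assuming the standard locale environment variables are honored (LC_ALL here is an assumption, not confirmed by this log), the localized output can be reproduced with roughly:
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-788436 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # same RSRC_INSUFFICIENT_REQ_MEMORY failure, with the message printed in French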

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
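The second status invocation above passes a Go template with -f, so individual fields of the status struct can be pulled out for scripting; the literal "kublet" text is just a label inside the test's template string, not a field name. A minimal sketch of both output modes:
    out/minikube-linux-amd64 -p functional-788436 status -f '{{.Host}}'   # single field, e.g. Running
    out/minikube-linux-amd64 -p functional-788436 status -o json          # full status as JSON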

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-788436 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-788436 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-bvn7v" [b54d6279-5ffc-4757-b993-cfa535dc0410] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-bvn7v" [b54d6279-5ffc-4757-b993-cfa535dc0410] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004047491s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.190:31585
functional_test.go:1671: http://192.168.39.190:31585: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-bvn7v

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.190:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.190:31585
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)
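For reference, the round trip exercised above is the standard NodePort flow; a sketch using the same names and image as this run (the 31585 port is assigned dynamically, so the URL printed by the service command should be captured rather than hard-coded):
    kubectl --context functional-788436 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-788436 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-788436 service hello-node-connect --url)
    curl -s "$URL"   # echoserver replies with the request details shown above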

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e33806ba-8abb-42e3-878c-aac763cfa3e2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007324102s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-788436 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-788436 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-788436 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [361b5f78-b256-47a6-8b55-5a1f4889b59d] Pending
helpers_test.go:344: "sp-pod" [361b5f78-b256-47a6-8b55-5a1f4889b59d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [361b5f78-b256-47a6-8b55-5a1f4889b59d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004786662s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-788436 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-788436 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-788436 delete -f testdata/storage-provisioner/pod.yaml: (3.108790758s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f6d58e5a-e774-4fed-a486-7dc95ba68ee4] Pending
helpers_test.go:344: "sp-pod" [f6d58e5a-e774-4fed-a486-7dc95ba68ee4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f6d58e5a-e774-4fed-a486-7dc95ba68ee4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004528877s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-788436 exec sp-pod -- ls /tmp/mount
2024/07/19 18:30:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.90s)
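The delete-and-reapply step above is what actually verifies persistence: the file written by the first sp-pod is still visible to the second pod because both mount the same PVC-backed volume. Reduced to the commands from this run, the check is:
    kubectl --context functional-788436 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-788436 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-788436 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-788436 exec sp-pod -- ls /tmp/mount                     # foo survives the pod deletion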

                                                
                                    
TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh -n functional-788436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cp functional-788436:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2077502441/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh -n functional-788436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh -n functional-788436 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)

                                                
                                    
TestFunctional/parallel/MySQL (21.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-788436 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-chncb" [3e1c6fc2-d874-48bd-bca1-616ea61f19d4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-chncb" [3e1c6fc2-d874-48bd-bca1-616ea61f19d4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003721384s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-788436 exec mysql-64454c8b5c-chncb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-788436 exec mysql-64454c8b5c-chncb -- mysql -ppassword -e "show databases;": exit status 1 (120.364854ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-788436 exec mysql-64454c8b5c-chncb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.76s)
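The first "show databases" attempt above fails with ERROR 2002 because mysqld inside the pod is still initializing even though the pod is already Running; the test simply retries. Done by hand, the same wait looks roughly like this (pod name taken from this run):
    until kubectl --context functional-788436 exec mysql-64454c8b5c-chncb -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2   # retry until mysqld is accepting connections on its socket
    done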

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/22028/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /etc/test/nested/copy/22028/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
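FileSync checks that files staged on the host under the minikube home's files directory are mirrored into the guest at the same path when the machine is provisioned; the 22028 path component is just a per-run identifier chosen by the test. A sketch of the idea, assuming the default $MINIKUBE_HOME/files sync directory and a hypothetical path:
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/example"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/example/hosts"
    out/minikube-linux-amd64 start -p functional-788436                                             # files are copied in during start
    out/minikube-linux-amd64 -p functional-788436 ssh "cat /etc/test/nested/copy/example/hosts"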

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/22028.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /etc/ssl/certs/22028.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/22028.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /usr/share/ca-certificates/22028.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/220282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /etc/ssl/certs/220282.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/220282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /usr/share/ca-certificates/220282.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
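CertSync is the certificate counterpart: certificates staged in the minikube home's certs directory end up in the guest's trust store both under their original name (22028.pem here) and under an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0). The hash half of that mapping can be checked locally for any PEM certificate (file name hypothetical):
    openssl x509 -in my-test-ca.pem -noout -subject_hash   # prints e.g. 3ec20f2e, matching the .0 file name inside the guest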

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-788436 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active docker": exit status 1 (193.927972ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active containerd": exit status 1 (196.147255ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
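The two non-zero exits above are the expected outcome: systemctl is-active exits with status 3 when a unit is inactive (the "Process exited with status 3" in the ssh stderr), and minikube ssh surfaces that as a non-zero exit, so this test passes precisely because docker and containerd report inactive on a crio cluster. The complementary check for the active runtime would be roughly:
    out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active crio"     # active, exit 0
    out/minikube-linux-amd64 -p functional-788436 ssh "sudo systemctl is-active docker"   # inactive, non-zero exit as seen above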

                                                
                                    
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-788436 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-788436
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-788436
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-788436 image ls --format short --alsologtostderr:
I0719 18:30:18.259335   32790 out.go:291] Setting OutFile to fd 1 ...
I0719 18:30:18.259517   32790 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.259530   32790 out.go:304] Setting ErrFile to fd 2...
I0719 18:30:18.259537   32790 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.259867   32790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
I0719 18:30:18.260633   32790 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.260775   32790 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.261351   32790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.261406   32790 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.283554   32790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37977
I0719 18:30:18.284046   32790 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.284640   32790 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.284664   32790 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.285017   32790 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.285220   32790 main.go:141] libmachine: (functional-788436) Calling .GetState
I0719 18:30:18.287015   32790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.287058   32790 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.303246   32790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
I0719 18:30:18.303665   32790 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.304231   32790 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.304295   32790 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.304589   32790 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.304763   32790 main.go:141] libmachine: (functional-788436) Calling .DriverName
I0719 18:30:18.304942   32790 ssh_runner.go:195] Run: systemctl --version
I0719 18:30:18.304961   32790 main.go:141] libmachine: (functional-788436) Calling .GetSSHHostname
I0719 18:30:18.307359   32790 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.307743   32790 main.go:141] libmachine: (functional-788436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:0d:62", ip: ""} in network mk-functional-788436: {Iface:virbr1 ExpiryTime:2024-07-19 19:27:05 +0000 UTC Type:0 Mac:52:54:00:45:0d:62 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-788436 Clientid:01:52:54:00:45:0d:62}
I0719 18:30:18.307776   32790 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined IP address 192.168.39.190 and MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.307884   32790 main.go:141] libmachine: (functional-788436) Calling .GetSSHPort
I0719 18:30:18.308036   32790 main.go:141] libmachine: (functional-788436) Calling .GetSSHKeyPath
I0719 18:30:18.308202   32790 main.go:141] libmachine: (functional-788436) Calling .GetSSHUsername
I0719 18:30:18.308360   32790 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/functional-788436/id_rsa Username:docker}
I0719 18:30:18.432457   32790 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 18:30:18.517628   32790 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.517641   32790 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.517933   32790 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.517949   32790 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:18.517960   32790 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.517970   32790 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.518206   32790 main.go:141] libmachine: (functional-788436) DBG | Closing plugin on server side
I0719 18:30:18.518238   32790 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.518246   32790 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-788436 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kicbase/echo-server           | functional-788436  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-788436  | 8d275a8f951c9 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-788436 image ls --format table --alsologtostderr:
I0719 18:30:18.866382   32923 out.go:291] Setting OutFile to fd 1 ...
I0719 18:30:18.866509   32923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.866520   32923 out.go:304] Setting ErrFile to fd 2...
I0719 18:30:18.866527   32923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.866785   32923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
I0719 18:30:18.867519   32923 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.867676   32923 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.868223   32923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.868270   32923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.883497   32923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
I0719 18:30:18.883923   32923 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.884455   32923 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.884477   32923 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.884886   32923 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.885068   32923 main.go:141] libmachine: (functional-788436) Calling .GetState
I0719 18:30:18.887005   32923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.887056   32923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.901847   32923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
I0719 18:30:18.902344   32923 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.902899   32923 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.902924   32923 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.903322   32923 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.903527   32923 main.go:141] libmachine: (functional-788436) Calling .DriverName
I0719 18:30:18.903772   32923 ssh_runner.go:195] Run: systemctl --version
I0719 18:30:18.903797   32923 main.go:141] libmachine: (functional-788436) Calling .GetSSHHostname
I0719 18:30:18.906396   32923 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.906798   32923 main.go:141] libmachine: (functional-788436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:0d:62", ip: ""} in network mk-functional-788436: {Iface:virbr1 ExpiryTime:2024-07-19 19:27:05 +0000 UTC Type:0 Mac:52:54:00:45:0d:62 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-788436 Clientid:01:52:54:00:45:0d:62}
I0719 18:30:18.906831   32923 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined IP address 192.168.39.190 and MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.906969   32923 main.go:141] libmachine: (functional-788436) Calling .GetSSHPort
I0719 18:30:18.907145   32923 main.go:141] libmachine: (functional-788436) Calling .GetSSHKeyPath
I0719 18:30:18.907300   32923 main.go:141] libmachine: (functional-788436) Calling .GetSSHUsername
I0719 18:30:18.907446   32923 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/functional-788436/id_rsa Username:docker}
I0719 18:30:19.020636   32923 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 18:30:19.119348   32923 main.go:141] libmachine: Making call to close driver server
I0719 18:30:19.119366   32923 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:19.119629   32923 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:19.119651   32923 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:19.119667   32923 main.go:141] libmachine: (functional-788436) DBG | Closing plugin on server side
I0719 18:30:19.119677   32923 main.go:141] libmachine: Making call to close driver server
I0719 18:30:19.119688   32923 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:19.119925   32923 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:19.119940   32923 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-788436 image ls --format json --alsologtostderr:
[{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],
"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-788436"],"size":"4943877"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f582
21796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["dock
er.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"8d275a8f951c9af29c934c14240edbed82bac73adec8fb00c186a7579d6f0ea3","repoDigests":["localhost/minikube-local-cache-test@sha256:7d31a17d543266459564bcb1262c020c4ad7aef567c75f469d2312b99e0c7d48"],"repoTags":["localhost/minikube-local-cache-test:functional-788436"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@s
ha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a08
58c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["reg
istry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-788436 image ls --format json --alsologtostderr:
I0719 18:30:18.600986   32856 out.go:291] Setting OutFile to fd 1 ...
I0719 18:30:18.601093   32856 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.601103   32856 out.go:304] Setting ErrFile to fd 2...
I0719 18:30:18.601107   32856 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.601281   32856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
I0719 18:30:18.601782   32856 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.601875   32856 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.602199   32856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.602233   32856 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.617128   32856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
I0719 18:30:18.617606   32856 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.618215   32856 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.618242   32856 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.618629   32856 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.618839   32856 main.go:141] libmachine: (functional-788436) Calling .GetState
I0719 18:30:18.620839   32856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.620878   32856 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.635199   32856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
I0719 18:30:18.635600   32856 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.636108   32856 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.636126   32856 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.636402   32856 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.636561   32856 main.go:141] libmachine: (functional-788436) Calling .DriverName
I0719 18:30:18.636733   32856 ssh_runner.go:195] Run: systemctl --version
I0719 18:30:18.636750   32856 main.go:141] libmachine: (functional-788436) Calling .GetSSHHostname
I0719 18:30:18.639157   32856 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.639530   32856 main.go:141] libmachine: (functional-788436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:0d:62", ip: ""} in network mk-functional-788436: {Iface:virbr1 ExpiryTime:2024-07-19 19:27:05 +0000 UTC Type:0 Mac:52:54:00:45:0d:62 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-788436 Clientid:01:52:54:00:45:0d:62}
I0719 18:30:18.639552   32856 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined IP address 192.168.39.190 and MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.639702   32856 main.go:141] libmachine: (functional-788436) Calling .GetSSHPort
I0719 18:30:18.639907   32856 main.go:141] libmachine: (functional-788436) Calling .GetSSHKeyPath
I0719 18:30:18.640049   32856 main.go:141] libmachine: (functional-788436) Calling .GetSSHUsername
I0719 18:30:18.640187   32856 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/functional-788436/id_rsa Username:docker}
I0719 18:30:18.754069   32856 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 18:30:18.813433   32856 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.813444   32856 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.813675   32856 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.813693   32856 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:18.813707   32856 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.813715   32856 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.813937   32856 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.813956   32856 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-788436 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-788436
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8d275a8f951c9af29c934c14240edbed82bac73adec8fb00c186a7579d6f0ea3
repoDigests:
- localhost/minikube-local-cache-test@sha256:7d31a17d543266459564bcb1262c020c4ad7aef567c75f469d2312b99e0c7d48
repoTags:
- localhost/minikube-local-cache-test:functional-788436
size: "3330"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-788436 image ls --format yaml --alsologtostderr:
I0719 18:30:18.320642   32813 out.go:291] Setting OutFile to fd 1 ...
I0719 18:30:18.320741   32813 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.320746   32813 out.go:304] Setting ErrFile to fd 2...
I0719 18:30:18.320750   32813 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.320910   32813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
I0719 18:30:18.323888   32813 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.324099   32813 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.324643   32813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.324691   32813 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.340033   32813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
I0719 18:30:18.340469   32813 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.341054   32813 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.341085   32813 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.341387   32813 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.341558   32813 main.go:141] libmachine: (functional-788436) Calling .GetState
I0719 18:30:18.343625   32813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.343670   32813 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.359012   32813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
I0719 18:30:18.359428   32813 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.359952   32813 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.359977   32813 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.360385   32813 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.360551   32813 main.go:141] libmachine: (functional-788436) Calling .DriverName
I0719 18:30:18.360757   32813 ssh_runner.go:195] Run: systemctl --version
I0719 18:30:18.360778   32813 main.go:141] libmachine: (functional-788436) Calling .GetSSHHostname
I0719 18:30:18.363387   32813 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.363799   32813 main.go:141] libmachine: (functional-788436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:0d:62", ip: ""} in network mk-functional-788436: {Iface:virbr1 ExpiryTime:2024-07-19 19:27:05 +0000 UTC Type:0 Mac:52:54:00:45:0d:62 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-788436 Clientid:01:52:54:00:45:0d:62}
I0719 18:30:18.363827   32813 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined IP address 192.168.39.190 and MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.363989   32813 main.go:141] libmachine: (functional-788436) Calling .GetSSHPort
I0719 18:30:18.364144   32813 main.go:141] libmachine: (functional-788436) Calling .GetSSHKeyPath
I0719 18:30:18.364280   32813 main.go:141] libmachine: (functional-788436) Calling .GetSSHUsername
I0719 18:30:18.364411   32813 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/functional-788436/id_rsa Username:docker}
I0719 18:30:18.468027   32813 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 18:30:18.545071   32813 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.545084   32813 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.545350   32813 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.545367   32813 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:18.545387   32813 main.go:141] libmachine: Making call to close driver server
I0719 18:30:18.545396   32813 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:18.545594   32813 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:18.545608   32813 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
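
For reference, the listing above can be reproduced against the same profile with the commands below (a minimal sketch; it assumes the out/minikube-linux-amd64 binary from this workspace and the functional-788436 profile are still running).

# Default table view of the images known to the cluster's runtime (crio).
out/minikube-linux-amd64 -p functional-788436 image ls
# Same listing rendered as YAML, as exercised by this test.
out/minikube-linux-amd64 -p functional-788436 image ls --format yaml --alsologtostderr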

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh pgrep buildkitd: exit status 1 (220.211249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image build -t localhost/my-image:functional-788436 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 image build -t localhost/my-image:functional-788436 testdata/build --alsologtostderr: (5.182564313s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-788436 image build -t localhost/my-image:functional-788436 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f870725feb3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-788436
--> 91657da7264
Successfully tagged localhost/my-image:functional-788436
91657da7264eff4328f72b43e334dfca7e90f86b73ef3dcb24b222ab086cac97
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-788436 image build -t localhost/my-image:functional-788436 testdata/build --alsologtostderr:
I0719 18:30:18.786347   32898 out.go:291] Setting OutFile to fd 1 ...
I0719 18:30:18.786485   32898 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.786495   32898 out.go:304] Setting ErrFile to fd 2...
I0719 18:30:18.786499   32898 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 18:30:18.786709   32898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
I0719 18:30:18.787271   32898 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.787857   32898 config.go:182] Loaded profile config "functional-788436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 18:30:18.788372   32898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.788442   32898 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.802923   32898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
I0719 18:30:18.803360   32898 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.803953   32898 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.803978   32898 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.804309   32898 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.804505   32898 main.go:141] libmachine: (functional-788436) Calling .GetState
I0719 18:30:18.806182   32898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 18:30:18.806216   32898 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 18:30:18.821587   32898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
I0719 18:30:18.821993   32898 main.go:141] libmachine: () Calling .GetVersion
I0719 18:30:18.822572   32898 main.go:141] libmachine: Using API Version  1
I0719 18:30:18.822602   32898 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 18:30:18.822937   32898 main.go:141] libmachine: () Calling .GetMachineName
I0719 18:30:18.823332   32898 main.go:141] libmachine: (functional-788436) Calling .DriverName
I0719 18:30:18.823578   32898 ssh_runner.go:195] Run: systemctl --version
I0719 18:30:18.823608   32898 main.go:141] libmachine: (functional-788436) Calling .GetSSHHostname
I0719 18:30:18.827119   32898 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.827662   32898 main.go:141] libmachine: (functional-788436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:0d:62", ip: ""} in network mk-functional-788436: {Iface:virbr1 ExpiryTime:2024-07-19 19:27:05 +0000 UTC Type:0 Mac:52:54:00:45:0d:62 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-788436 Clientid:01:52:54:00:45:0d:62}
I0719 18:30:18.827698   32898 main.go:141] libmachine: (functional-788436) DBG | domain functional-788436 has defined IP address 192.168.39.190 and MAC address 52:54:00:45:0d:62 in network mk-functional-788436
I0719 18:30:18.827818   32898 main.go:141] libmachine: (functional-788436) Calling .GetSSHPort
I0719 18:30:18.828022   32898 main.go:141] libmachine: (functional-788436) Calling .GetSSHKeyPath
I0719 18:30:18.828203   32898 main.go:141] libmachine: (functional-788436) Calling .GetSSHUsername
I0719 18:30:18.828375   32898 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/functional-788436/id_rsa Username:docker}
I0719 18:30:18.941971   32898 build_images.go:161] Building image from path: /tmp/build.3871820181.tar
I0719 18:30:18.942030   32898 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 18:30:18.957875   32898 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3871820181.tar
I0719 18:30:18.970702   32898 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3871820181.tar: stat -c "%s %y" /var/lib/minikube/build/build.3871820181.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3871820181.tar': No such file or directory
I0719 18:30:18.970740   32898 ssh_runner.go:362] scp /tmp/build.3871820181.tar --> /var/lib/minikube/build/build.3871820181.tar (3072 bytes)
I0719 18:30:19.015729   32898 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3871820181
I0719 18:30:19.044876   32898 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3871820181 -xf /var/lib/minikube/build/build.3871820181.tar
I0719 18:30:19.075066   32898 crio.go:315] Building image: /var/lib/minikube/build/build.3871820181
I0719 18:30:19.075129   32898 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-788436 /var/lib/minikube/build/build.3871820181 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0719 18:30:23.896869   32898 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-788436 /var/lib/minikube/build/build.3871820181 --cgroup-manager=cgroupfs: (4.821717823s)
I0719 18:30:23.896945   32898 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3871820181
I0719 18:30:23.912257   32898 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3871820181.tar
I0719 18:30:23.921194   32898 build_images.go:217] Built localhost/my-image:functional-788436 from /tmp/build.3871820181.tar
I0719 18:30:23.921226   32898 build_images.go:133] succeeded building to: functional-788436
I0719 18:30:23.921231   32898 build_images.go:134] failed building to: 
I0719 18:30:23.921260   32898 main.go:141] libmachine: Making call to close driver server
I0719 18:30:23.921277   32898 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:23.921560   32898 main.go:141] libmachine: (functional-788436) DBG | Closing plugin on server side
I0719 18:30:23.921568   32898 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:23.921581   32898 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:23.921589   32898 main.go:141] libmachine: Making call to close driver server
I0719 18:30:23.921598   32898 main.go:141] libmachine: (functional-788436) Calling .Close
I0719 18:30:23.921844   32898 main.go:141] libmachine: Successfully made call to close driver server
I0719 18:30:23.921856   32898 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 18:30:23.921874   32898 main.go:141] libmachine: (functional-788436) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)
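
A minimal sketch of the flow this test drives, reusing the commands from the log above and assuming the functional-788436 profile is running. The stderr shows that on crio the build context is copied into the VM as a tar and built with podman (--cgroup-manager=cgroupfs) rather than buildkitd, which is why the pgrep probe exits non-zero.

# buildkitd is not expected on a crio node; a non-zero exit here is normal.
out/minikube-linux-amd64 -p functional-788436 ssh pgrep buildkitd
# Build the context under testdata/build into the cluster's image store.
out/minikube-linux-amd64 -p functional-788436 image build \
  -t localhost/my-image:functional-788436 testdata/build --alsologtostderr
# Confirm the freshly built image is visible to the runtime.
out/minikube-linux-amd64 -p functional-788436 image ls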

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.767297319s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-788436
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-788436 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-788436 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-cgj2f" [34ae4b2b-f5a1-43aa-be67-a965a3d931b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-cgj2f" [34ae4b2b-f5a1-43aa-be67-a965a3d931b2] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00394241s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image load --daemon docker.io/kicbase/echo-server:functional-788436 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-788436 image load --daemon docker.io/kicbase/echo-server:functional-788436 --alsologtostderr: (2.422226398s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image load --daemon docker.io/kicbase/echo-server:functional-788436 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-788436
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image load --daemon docker.io/kicbase/echo-server:functional-788436 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image save docker.io/kicbase/echo-server:functional-788436 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image rm docker.io/kicbase/echo-server:functional-788436 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-788436
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 image save --daemon docker.io/kicbase/echo-server:functional-788436 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-788436
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)
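
Taken together, the ImageCommands cases above exercise a full round trip between the host Docker daemon, a tarball, and the cluster's image store. A condensed sketch follows, reusing the commands from the logs; the tarball path is arbitrary here (the run above wrote it into the Jenkins workspace).

docker pull docker.io/kicbase/echo-server:1.0
docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-788436
# Host daemon -> cluster
out/minikube-linux-amd64 -p functional-788436 image load --daemon docker.io/kicbase/echo-server:functional-788436
# Cluster -> tarball -> cluster
out/minikube-linux-amd64 -p functional-788436 image save docker.io/kicbase/echo-server:functional-788436 ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-788436 image rm docker.io/kicbase/echo-server:functional-788436
out/minikube-linux-amd64 -p functional-788436 image load ./echo-server-save.tar
# Cluster -> host daemon, then verify the image is back in Docker.
docker rmi docker.io/kicbase/echo-server:functional-788436
out/minikube-linux-amd64 -p functional-788436 image save --daemon docker.io/kicbase/echo-server:functional-788436
docker image inspect docker.io/kicbase/echo-server:functional-788436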

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "246.886686ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "42.783629ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "228.424604ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.735645ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
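
The three ProfileCmd cases only vary the output format of the same profile listing; a sketch of the variants timed above:

out/minikube-linux-amd64 profile list                   # human-readable table
out/minikube-linux-amd64 profile list -l                 # light mode skips cluster status probes, hence the ~40ms timings
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light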

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (20.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdany-port1993969944/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721413794441527884" to /tmp/TestFunctionalparallelMountCmdany-port1993969944/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721413794441527884" to /tmp/TestFunctionalparallelMountCmdany-port1993969944/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721413794441527884" to /tmp/TestFunctionalparallelMountCmdany-port1993969944/001/test-1721413794441527884
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.891302ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 18:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 18:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 18:29 test-1721413794441527884
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh cat /mount-9p/test-1721413794441527884
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-788436 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7766384c-d2dc-4335-bf2a-6d6dfa455b57] Pending
helpers_test.go:344: "busybox-mount" [7766384c-d2dc-4335-bf2a-6d6dfa455b57] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7766384c-d2dc-4335-bf2a-6d6dfa455b57] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7766384c-d2dc-4335-bf2a-6d6dfa455b57] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.003327313s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-788436 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdany-port1993969944/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.87s)
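
A minimal sketch of the 9p mount workflow verified above, assuming the functional-788436 profile; /tmp/mount-src stands in for the per-test temp directory used by this run.

# Export a host directory into the guest over 9p (the command stays in the foreground, so background it here).
out/minikube-linux-amd64 mount -p functional-788436 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
# Verify the mount from inside the guest and inspect its contents.
out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-788436 ssh -- ls -la /mount-9p
# Tear it down when done.
out/minikube-linux-amd64 -p functional-788436 ssh "sudo umount -f /mount-9p"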

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service list -o json
functional_test.go:1490: Took "292.626021ms" to run "out/minikube-linux-amd64 -p functional-788436 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.190:32360
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.190:32360
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
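
The ServiceCmd cases above boil down to deploying one NodePort service and asking minikube for its endpoint in different shapes; a sketch reusing the same commands (the echoserver image and hello-node names are the ones from this run):

kubectl --context functional-788436 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-788436 expose deployment hello-node --type=NodePort --port=8080
# Different views of the same NodePort endpoint on the node IP (192.168.39.190 in this run).
out/minikube-linux-amd64 -p functional-788436 service list
out/minikube-linux-amd64 -p functional-788436 service list -o json
out/minikube-linux-amd64 -p functional-788436 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-788436 service hello-node --url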

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdspecific-port3658595968/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.706568ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh -- ls -la /mount-9p
E0719 18:30:16.271339   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdspecific-port3658595968/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "sudo umount -f /mount-9p": exit status 1 (188.073565ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-788436 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdspecific-port3658595968/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount1: exit status 1 (270.14482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-788436 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-788436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4117990599/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)
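
VerifyCleanup checks that a single --kill=true invocation tears down every outstanding mount for the profile; a sketch with the three mount points used above (the host directory is arbitrary):

out/minikube-linux-amd64 mount -p functional-788436 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-788436 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-788436 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-788436 ssh "findmnt -T" /mount1
# One kill cleans up all mount processes for the profile.
out/minikube-linux-amd64 mount -p functional-788436 --kill=true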

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-788436
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-788436
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-788436
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (206.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-269108 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 18:32:32.426058   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:33:00.111671   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-269108 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.572898414s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.22s)
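
StartCluster is the single start invocation below; --ha requests extra control-plane nodes up front, and the follow-up status call is what the test asserts on. A sketch reusing this run's flags and profile name:

out/minikube-linux-amd64 start -p ha-269108 --wait=true --memory=2200 --ha -v=7 --alsologtostderr \
  --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr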

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-269108 -- rollout status deployment/busybox: (4.282677175s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-hds6j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-lrphf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-hds6j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-lrphf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-hds6j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-lrphf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.47s)
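
DeployApp rolls out the busybox DNS-test deployment across the HA cluster and resolves in-cluster names from each replica; a condensed sketch (the busybox-fc5497c4f-* pod names are specific to this run):

out/minikube-linux-amd64 kubectl -p ha-269108 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-269108 -- rollout status deployment/busybox
out/minikube-linux-amd64 kubectl -p ha-269108 -- get pods -o jsonpath='{.items[*].metadata.name}'
# Repeat for each pod name returned above:
out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- nslookup kubernetes.default.svc.cluster.local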

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-hds6j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-hds6j -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-lrphf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-lrphf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
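
PingHostFromPods resolves host.minikube.internal from each busybox replica and pings the resulting host address (192.168.39.1 in this run); condensed:

out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 kubectl -p ha-269108 -- exec busybox-fc5497c4f-8qqtr -- sh -c "ping -c 1 192.168.39.1"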

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-269108 -v=7 --alsologtostderr
E0719 18:34:43.525033   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.530371   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.540653   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.561020   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.601286   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.681650   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:43.841936   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:44.162255   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:44.802549   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:46.083352   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:48.643966   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:34:53.764755   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-269108 -v=7 --alsologtostderr: (55.771884741s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.57s)
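
AddWorkerNode grows the same cluster by one worker node and re-runs the status check; sketch with this run's profile:

out/minikube-linux-amd64 node add -p ha-269108 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr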

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-269108 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp testdata/cp-test.txt ha-269108:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108:/home/docker/cp-test.txt ha-269108-m02:/home/docker/cp-test_ha-269108_ha-269108-m02.txt
E0719 18:35:04.005355   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test_ha-269108_ha-269108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108:/home/docker/cp-test.txt ha-269108-m03:/home/docker/cp-test_ha-269108_ha-269108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test_ha-269108_ha-269108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108:/home/docker/cp-test.txt ha-269108-m04:/home/docker/cp-test_ha-269108_ha-269108-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test_ha-269108_ha-269108-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp testdata/cp-test.txt ha-269108-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m02:/home/docker/cp-test.txt ha-269108:/home/docker/cp-test_ha-269108-m02_ha-269108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test_ha-269108-m02_ha-269108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m02:/home/docker/cp-test.txt ha-269108-m03:/home/docker/cp-test_ha-269108-m02_ha-269108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test_ha-269108-m02_ha-269108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m02:/home/docker/cp-test.txt ha-269108-m04:/home/docker/cp-test_ha-269108-m02_ha-269108-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test_ha-269108-m02_ha-269108-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp testdata/cp-test.txt ha-269108-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt ha-269108:/home/docker/cp-test_ha-269108-m03_ha-269108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test_ha-269108-m03_ha-269108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt ha-269108-m02:/home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test_ha-269108-m03_ha-269108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m03:/home/docker/cp-test.txt ha-269108-m04:/home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test_ha-269108-m03_ha-269108-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp testdata/cp-test.txt ha-269108-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile835163817/001/cp-test_ha-269108-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt ha-269108:/home/docker/cp-test_ha-269108-m04_ha-269108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108 "sudo cat /home/docker/cp-test_ha-269108-m04_ha-269108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt ha-269108-m02:/home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test_ha-269108-m04_ha-269108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 cp ha-269108-m04:/home/docker/cp-test.txt ha-269108-m03:/home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m03 "sudo cat /home/docker/cp-test_ha-269108-m04_ha-269108-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.35s)
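The CopyFile step above runs the same round trip for every node pair: push a reference file with minikube cp, read it back on the target node over minikube ssh, and compare the two. A minimal sketch of that pattern, assuming the ha-269108 profile from this run is still up and testdata/cp-test.txt exists locally:

  # push the file to a node, read it back, and fail on any difference
  out/minikube-linux-amd64 -p ha-269108 cp testdata/cp-test.txt ha-269108-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-269108 ssh -n ha-269108-m02 "sudo cat /home/docker/cp-test.txt" > /tmp/cp-test-roundtrip.txt
  diff -u testdata/cp-test.txt /tmp/cp-test-roundtrip.txt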

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.467930503s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 node delete m03 -v=7 --alsologtostderr
E0719 18:44:43.525302   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-269108 node delete m03 -v=7 --alsologtostderr: (16.290990639s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.01s)
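The node-readiness assertion at ha_test.go:519 uses a kubectl go-template rather than jsonpath so it can emit one Ready/NotReady status per line. The same check can be run by hand; a sketch, assuming kubectl is pointed at the ha-269108 context:

  # prints one line per node: True when the Ready condition holds
  kubectl --context ha-269108 get nodes \
    -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'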

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (349.8s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-269108 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 18:47:32.427998   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 18:49:43.524850   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:51:06.569109   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 18:52:32.426419   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-269108 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m49.087800896s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.80s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (78.13s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-269108 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-269108 --control-plane -v=7 --alsologtostderr: (1m17.340162945s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-269108 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.13s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.50s)

TestJSONOutput/start/Command (51.58s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-834522 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0719 18:54:43.525202   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-834522 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (51.577940548s)
--- PASS: TestJSONOutput/start/Command (51.58s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-834522 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-834522 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-834522 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-834522 --output=json --user=testUser: (10.372719458s)
--- PASS: TestJSONOutput/stop/Command (10.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-522218 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-522218 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (53.694233ms)
-- stdout --
	{"specversion":"1.0","id":"9958f1e9-be05-4411-b281-ed5e2c9e7a1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-522218] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"914b3598-d8d0-46e8-9a5f-6bb8c9a61e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19307"}}
	{"specversion":"1.0","id":"a770a5e8-57a7-43b9-a338-afa747551ff2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4b2f9ae-5afc-40ab-bd89-3dccb0222997","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig"}}
	{"specversion":"1.0","id":"d067dc81-1d4a-4636-86e3-c66e907c2882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube"}}
	{"specversion":"1.0","id":"ca534199-c3a6-4db8-9585-7f6012dbed83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2f1202ce-cf7d-47e9-8f2c-e7faf5fe15d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5980271a-fa40-4dd4-9321-969198b1c4b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-522218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-522218
--- PASS: TestErrorJSONOutput (0.18s)
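As the captured stdout above shows, --output=json turns every progress line into a CloudEvents-style object (specversion, id, source, type, data), so the stream is straightforward to post-process. A hedged sketch of one way to consume it, assuming jq is available and using a hypothetical json-demo profile:

  # print only the user-facing step messages from a JSON-mode start
  out/minikube-linux-amd64 start -p json-demo --output=json --driver=kvm2 --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'

Error events use the type io.k8s.sigs.minikube.error and carry exitcode and message fields, as in the DRV_UNSUPPORTED_OS event above.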

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (83.99s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-713184 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-713184 --driver=kvm2  --container-runtime=crio: (41.671083451s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-716377 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-716377 --driver=kvm2  --container-runtime=crio: (39.724252088s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-713184
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-716377
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-716377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-716377
helpers_test.go:175: Cleaning up "first-713184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-713184
--- PASS: TestMinikubeProfile (83.99s)

TestMountStart/serial/StartWithMountFirst (26.44s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-438712 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-438712 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.436685491s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.44s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-438712 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-438712 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
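The Verify* steps only assert that the host directory is visible as a 9p mount inside the guest. The same check by hand, assuming the mount-start-1-438712 profile from the step above is still running:

  # the 9p entry backing /minikube-host should appear, and the directory should list
  out/minikube-linux-amd64 -p mount-start-1-438712 ssh -- mount | grep 9p
  out/minikube-linux-amd64 -p mount-start-1-438712 ssh -- ls /minikube-host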

TestMountStart/serial/StartWithMountSecond (25s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-452132 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0719 18:57:32.427970   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-452132 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.996993599s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.00s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-438712 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-452132
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-452132: (1.260947058s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (22.99s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-452132
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-452132: (21.990621715s)
--- PASS: TestMountStart/serial/RestartStopped (22.99s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-452132 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (116.3s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269079 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 18:59:43.524968   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269079 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.914595208s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.30s)

TestMultiNode/serial/DeployApp2Nodes (5.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-269079 -- rollout status deployment/busybox: (3.7080503s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-6fcln -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-zh2rr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-6fcln -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-zh2rr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-6fcln -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-zh2rr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)
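DeployApp2Nodes spreads a two-replica busybox deployment across both nodes and then resolves in-cluster names from each pod. An equivalent check outside the harness, as a sketch using plain kubectl (the exec deploy/busybox form, which targets one pod of the deployment, is an assumption rather than the test's exact invocation):

  # wait for the rollout, then exercise cluster DNS from inside a pod
  kubectl --context multinode-269079 rollout status deployment/busybox
  kubectl --context multinode-269079 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local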

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-6fcln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-6fcln -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-zh2rr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269079 -- exec busybox-fc5497c4f-zh2rr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
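PingHostFrom2Pods first resolves host.minikube.internal inside each pod, takes the fifth line of busybox's nslookup output and its third space-separated field as the host address (192.168.39.1 in this run), and then pings it. Roughly the same thing by hand, as a sketch assuming the busybox deployment above is still running:

  # resolve the host gateway from inside a pod, then ping it once
  HOST_IP=$(kubectl --context multinode-269079 exec deploy/busybox -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context multinode-269079 exec deploy/busybox -- ping -c 1 "$HOST_IP"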

TestMultiNode/serial/AddNode (52.46s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-269079 -v 3 --alsologtostderr
E0719 19:00:35.473017   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-269079 -v 3 --alsologtostderr: (51.923171214s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.46s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-269079 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp testdata/cp-test.txt multinode-269079:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079:/home/docker/cp-test.txt multinode-269079-m02:/home/docker/cp-test_multinode-269079_multinode-269079-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test_multinode-269079_multinode-269079-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079:/home/docker/cp-test.txt multinode-269079-m03:/home/docker/cp-test_multinode-269079_multinode-269079-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test_multinode-269079_multinode-269079-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp testdata/cp-test.txt multinode-269079-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt multinode-269079:/home/docker/cp-test_multinode-269079-m02_multinode-269079.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test_multinode-269079-m02_multinode-269079.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m02:/home/docker/cp-test.txt multinode-269079-m03:/home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test_multinode-269079-m02_multinode-269079-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp testdata/cp-test.txt multinode-269079-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1789310348/001/cp-test_multinode-269079-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt multinode-269079:/home/docker/cp-test_multinode-269079-m03_multinode-269079.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079 "sudo cat /home/docker/cp-test_multinode-269079-m03_multinode-269079.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 cp multinode-269079-m03:/home/docker/cp-test.txt multinode-269079-m02:/home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 ssh -n multinode-269079-m02 "sudo cat /home/docker/cp-test_multinode-269079-m03_multinode-269079-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.84s)

TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-269079 node stop m03: (1.342958212s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269079 status: exit status 7 (405.33327ms)
-- stdout --
	multinode-269079
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-269079-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-269079-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr: exit status 7 (404.034606ms)
-- stdout --
	multinode-269079
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-269079-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-269079-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0719 19:01:19.470989   50285 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:01:19.471224   50285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:01:19.471233   50285 out.go:304] Setting ErrFile to fd 2...
	I0719 19:01:19.471237   50285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:01:19.471421   50285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:01:19.471564   50285 out.go:298] Setting JSON to false
	I0719 19:01:19.471590   50285 mustload.go:65] Loading cluster: multinode-269079
	I0719 19:01:19.471630   50285 notify.go:220] Checking for updates...
	I0719 19:01:19.471947   50285 config.go:182] Loaded profile config "multinode-269079": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 19:01:19.471963   50285 status.go:255] checking status of multinode-269079 ...
	I0719 19:01:19.472321   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.472373   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.491588   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0719 19:01:19.492058   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.492674   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.492702   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.492985   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.493147   50285 main.go:141] libmachine: (multinode-269079) Calling .GetState
	I0719 19:01:19.494609   50285 status.go:330] multinode-269079 host status = "Running" (err=<nil>)
	I0719 19:01:19.494625   50285 host.go:66] Checking if "multinode-269079" exists ...
	I0719 19:01:19.494881   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.494910   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.509438   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0719 19:01:19.509861   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.510352   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.510370   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.510663   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.510853   50285 main.go:141] libmachine: (multinode-269079) Calling .GetIP
	I0719 19:01:19.513636   50285 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:01:19.514036   50285 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:01:19.514067   50285 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:01:19.514231   50285 host.go:66] Checking if "multinode-269079" exists ...
	I0719 19:01:19.514564   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.514615   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.530274   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0719 19:01:19.530646   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.531051   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.531074   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.531373   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.531565   50285 main.go:141] libmachine: (multinode-269079) Calling .DriverName
	I0719 19:01:19.531830   50285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 19:01:19.531877   50285 main.go:141] libmachine: (multinode-269079) Calling .GetSSHHostname
	I0719 19:01:19.534403   50285 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:01:19.534835   50285 main.go:141] libmachine: (multinode-269079) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:cc:9c", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:58:29 +0000 UTC Type:0 Mac:52:54:00:e2:cc:9c Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:multinode-269079 Clientid:01:52:54:00:e2:cc:9c}
	I0719 19:01:19.534874   50285 main.go:141] libmachine: (multinode-269079) DBG | domain multinode-269079 has defined IP address 192.168.39.154 and MAC address 52:54:00:e2:cc:9c in network mk-multinode-269079
	I0719 19:01:19.534998   50285 main.go:141] libmachine: (multinode-269079) Calling .GetSSHPort
	I0719 19:01:19.535165   50285 main.go:141] libmachine: (multinode-269079) Calling .GetSSHKeyPath
	I0719 19:01:19.535316   50285 main.go:141] libmachine: (multinode-269079) Calling .GetSSHUsername
	I0719 19:01:19.535440   50285 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079/id_rsa Username:docker}
	I0719 19:01:19.618893   50285 ssh_runner.go:195] Run: systemctl --version
	I0719 19:01:19.624456   50285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:01:19.638818   50285 kubeconfig.go:125] found "multinode-269079" server: "https://192.168.39.154:8443"
	I0719 19:01:19.638845   50285 api_server.go:166] Checking apiserver status ...
	I0719 19:01:19.638885   50285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 19:01:19.651586   50285 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0719 19:01:19.661254   50285 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 19:01:19.661315   50285 ssh_runner.go:195] Run: ls
	I0719 19:01:19.665187   50285 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0719 19:01:19.668986   50285 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I0719 19:01:19.669005   50285 status.go:422] multinode-269079 apiserver status = Running (err=<nil>)
	I0719 19:01:19.669013   50285 status.go:257] multinode-269079 status: &{Name:multinode-269079 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 19:01:19.669027   50285 status.go:255] checking status of multinode-269079-m02 ...
	I0719 19:01:19.669304   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.669340   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.684483   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0719 19:01:19.684919   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.685379   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.685398   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.685661   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.685823   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetState
	I0719 19:01:19.687316   50285 status.go:330] multinode-269079-m02 host status = "Running" (err=<nil>)
	I0719 19:01:19.687330   50285 host.go:66] Checking if "multinode-269079-m02" exists ...
	I0719 19:01:19.687631   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.687667   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.702359   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I0719 19:01:19.702710   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.703108   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.703127   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.703442   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.703628   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetIP
	I0719 19:01:19.706468   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | domain multinode-269079-m02 has defined MAC address 52:54:00:ee:d4:07 in network mk-multinode-269079
	I0719 19:01:19.706836   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:d4:07", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:59:39 +0000 UTC Type:0 Mac:52:54:00:ee:d4:07 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-269079-m02 Clientid:01:52:54:00:ee:d4:07}
	I0719 19:01:19.706863   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | domain multinode-269079-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:ee:d4:07 in network mk-multinode-269079
	I0719 19:01:19.707026   50285 host.go:66] Checking if "multinode-269079-m02" exists ...
	I0719 19:01:19.707299   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.707334   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.721335   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0719 19:01:19.721692   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.722148   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.722166   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.722450   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.722647   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .DriverName
	I0719 19:01:19.722837   50285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 19:01:19.722854   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetSSHHostname
	I0719 19:01:19.725318   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | domain multinode-269079-m02 has defined MAC address 52:54:00:ee:d4:07 in network mk-multinode-269079
	I0719 19:01:19.725714   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:d4:07", ip: ""} in network mk-multinode-269079: {Iface:virbr1 ExpiryTime:2024-07-19 19:59:39 +0000 UTC Type:0 Mac:52:54:00:ee:d4:07 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-269079-m02 Clientid:01:52:54:00:ee:d4:07}
	I0719 19:01:19.725752   50285 main.go:141] libmachine: (multinode-269079-m02) DBG | domain multinode-269079-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:ee:d4:07 in network mk-multinode-269079
	I0719 19:01:19.725857   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetSSHPort
	I0719 19:01:19.726038   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetSSHKeyPath
	I0719 19:01:19.726160   50285 main.go:141] libmachine: (multinode-269079-m02) Calling .GetSSHUsername
	I0719 19:01:19.726299   50285 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19307-14841/.minikube/machines/multinode-269079-m02/id_rsa Username:docker}
	I0719 19:01:19.802495   50285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 19:01:19.815434   50285 status.go:257] multinode-269079-m02 status: &{Name:multinode-269079-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 19:01:19.815461   50285 status.go:255] checking status of multinode-269079-m03 ...
	I0719 19:01:19.815821   50285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 19:01:19.815891   50285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 19:01:19.831118   50285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0719 19:01:19.831546   50285 main.go:141] libmachine: () Calling .GetVersion
	I0719 19:01:19.832014   50285 main.go:141] libmachine: Using API Version  1
	I0719 19:01:19.832034   50285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 19:01:19.832340   50285 main.go:141] libmachine: () Calling .GetMachineName
	I0719 19:01:19.832517   50285 main.go:141] libmachine: (multinode-269079-m03) Calling .GetState
	I0719 19:01:19.834127   50285 status.go:330] multinode-269079-m03 host status = "Stopped" (err=<nil>)
	I0719 19:01:19.834155   50285 status.go:343] host is not running, skipping remaining checks
	I0719 19:01:19.834163   50285 status.go:257] multinode-269079-m03 status: &{Name:multinode-269079-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-269079 node start m03 -v=7 --alsologtostderr: (38.749741056s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.36s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-269079 node delete m03: (1.751049522s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)
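
The go-template query used just above is the readiness check for the remaining nodes; as a minimal standalone sketch (assuming the multinode-269079 context is the current kubectl context), it prints one Ready status per node, so two lines are expected once m03 has been deleted:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'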

                                                
                                    
TestMultiNode/serial/RestartMultiNode (186.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269079 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 19:12:32.425976   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269079 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m6.18489658s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269079 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (186.69s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269079
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269079-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-269079-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.814779ms)

                                                
                                                
-- stdout --
	* [multinode-269079-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-269079-m02' is duplicated with machine name 'multinode-269079-m02' in profile 'multinode-269079'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269079-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269079-m03 --driver=kvm2  --container-runtime=crio: (41.691934553s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-269079
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-269079: exit status 80 (198.171635ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-269079 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-269079-m03 already exists in multinode-269079-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-269079-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.76s)
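
The two non-zero exits above are minikube's profile-name guards: a new profile cannot reuse a machine name that already belongs to the multi-node profile, and `node add` refuses a node name that collides with an existing profile. A quick pre-flight check (sketch; "unique-profile" is a placeholder name) is to list existing profiles before picking a new one:

    $ out/minikube-linux-amd64 profile list
    $ out/minikube-linux-amd64 start -p unique-profile --driver=kvm2 --container-runtime=crio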

                                                
                                    
TestScheduledStopUnix (110.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-224649 --memory=2048 --driver=kvm2  --container-runtime=crio
E0719 19:17:15.473578   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-224649 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.021452797s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224649 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-224649 -n scheduled-stop-224649
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224649 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224649 --cancel-scheduled
E0719 19:17:32.428029   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224649 -n scheduled-stop-224649
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-224649
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-224649 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-224649
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-224649: exit status 7 (65.2326ms)

                                                
                                                
-- stdout --
	scheduled-stop-224649
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224649 -n scheduled-stop-224649
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-224649 -n scheduled-stop-224649: exit status 7 (61.432595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-224649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-224649
--- PASS: TestScheduledStopUnix (110.54s)
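
The run above exercises the scheduled-stop flags end to end; a minimal sketch of the same flow on an arbitrary profile ("my-profile" is a placeholder) looks like this:

    $ minikube stop -p my-profile --schedule 5m         # arm a stop five minutes out
    $ minikube stop -p my-profile --cancel-scheduled     # cancel the pending stop
    $ minikube stop -p my-profile --schedule 15s         # re-arm with a short timer
    $ minikube status -p my-profile --format={{.Host}}   # reports Stopped once the timer fires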

                                                
                                    
TestRunningBinaryUpgrade (181.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2438545458 start -p running-upgrade-197194 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2438545458 start -p running-upgrade-197194 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.724925711s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-197194 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-197194 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.826517948s)
helpers_test.go:175: Cleaning up "running-upgrade-197194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-197194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-197194: (1.194614722s)
--- PASS: TestRunningBinaryUpgrade (181.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.249612ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-793327] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
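
As the MK_USAGE error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive: either start a VM without Kubernetes or pin a version, not both. A sketch of the two valid forms, using the same driver and runtime as this suite:

    $ minikube start -p NoKubernetes-793327 --no-kubernetes --driver=kvm2 --container-runtime=crio
    $ minikube start -p NoKubernetes-793327 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3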

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (108.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793327 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793327 --driver=kvm2  --container-runtime=crio: (1m48.223394011s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-793327 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (108.46s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --driver=kvm2  --container-runtime=crio: (17.492175292s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-793327 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-793327 status -o json: exit status 2 (234.125792ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-793327","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-793327
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.52s)

                                                
                                    
TestNoKubernetes/serial/Start (36.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793327 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.552664645s)
--- PASS: TestNoKubernetes/serial/Start (36.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-793327 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-793327 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.577735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-793327
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-793327: (1.286524724s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (42.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-793327 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-793327 --driver=kvm2  --container-runtime=crio: (42.213992337s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-793327 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-793327 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.220765ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (147.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1382966242 start -p stopped-upgrade-982139 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0719 19:22:32.427802   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1382966242 start -p stopped-upgrade-982139 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.267775873s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1382966242 -p stopped-upgrade-982139 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1382966242 -p stopped-upgrade-982139 stop: (13.144796384s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-982139 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-982139 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.112422895s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (147.53s)

                                                
                                    
TestNetworkPlugins/group/false (3.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-179125 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-179125 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.712783ms)

                                                
                                                
-- stdout --
	* [false-179125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 19:24:09.134485   63041 out.go:291] Setting OutFile to fd 1 ...
	I0719 19:24:09.134617   63041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:24:09.134628   63041 out.go:304] Setting ErrFile to fd 2...
	I0719 19:24:09.134635   63041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 19:24:09.134845   63041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19307-14841/.minikube/bin
	I0719 19:24:09.135448   63041 out.go:298] Setting JSON to false
	I0719 19:24:09.136472   63041 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7592,"bootTime":1721409457,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 19:24:09.136534   63041 start.go:139] virtualization: kvm guest
	I0719 19:24:09.138548   63041 out.go:177] * [false-179125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 19:24:09.140228   63041 notify.go:220] Checking for updates...
	I0719 19:24:09.141205   63041 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 19:24:09.142490   63041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 19:24:09.143756   63041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19307-14841/kubeconfig
	I0719 19:24:09.145063   63041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19307-14841/.minikube
	I0719 19:24:09.146344   63041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 19:24:09.147503   63041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 19:24:09.148959   63041 config.go:182] Loaded profile config "kubernetes-upgrade-821162": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 19:24:09.149064   63041 config.go:182] Loaded profile config "running-upgrade-197194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0719 19:24:09.149157   63041 config.go:182] Loaded profile config "stopped-upgrade-982139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0719 19:24:09.149251   63041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 19:24:09.191502   63041 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 19:24:09.192816   63041 start.go:297] selected driver: kvm2
	I0719 19:24:09.192842   63041 start.go:901] validating driver "kvm2" against <nil>
	I0719 19:24:09.192856   63041 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 19:24:09.195321   63041 out.go:177] 
	W0719 19:24:09.196501   63041 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0719 19:24:09.197619   63041 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-179125 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:24:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.140:8443
  name: running-upgrade-197194
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:23:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.51:8443
  name: stopped-upgrade-982139
contexts:
- context:
    cluster: running-upgrade-197194
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:24:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-197194
  name: running-upgrade-197194
- context:
    cluster: stopped-upgrade-982139
    user: stopped-upgrade-982139
  name: stopped-upgrade-982139
current-context: running-upgrade-197194
kind: Config
preferences: {}
users:
- name: running-upgrade-197194
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.key
- name: stopped-upgrade-982139
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-179125

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-179125"

                                                
                                                
----------------------- debugLogs end: false-179125 [took: 2.992726261s] --------------------------------
helpers_test.go:175: Cleaning up "false-179125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-179125
--- PASS: TestNetworkPlugins/group/false (3.27s)
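
The expected exit status 14 above comes from pairing --cni=false with crio, which requires a CNI. The passing crio starts throughout this report simply omit the flag so minikube selects a CNI automatically; a sketch ("some-profile" is a placeholder):

    $ minikube start -p some-profile --memory=2048 --driver=kvm2 --container-runtime=crio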

                                                
                                    
TestPause/serial/Start (57.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-666146 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0719 19:24:26.570196   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-666146 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (57.561039817s)
--- PASS: TestPause/serial/Start (57.56s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-982139
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (98.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-971041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-971041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m38.679446452s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.68s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (68.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-666146 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-666146 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.776201121s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (68.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (115.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-889918 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-889918 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m55.171122276s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (115.17s)

                                                
                                    
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-666146 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-666146 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-666146 --output=json --layout=cluster: exit status 2 (241.423162ms)

                                                
                                                
-- stdout --
	{"Name":"pause-666146","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-666146","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-666146 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-666146 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-666146 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.67s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-578533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-578533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (53.662971825s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-971041 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd8ffb3c-cf8b-4647-896f-36267f784b6f] Pending
helpers_test.go:344: "busybox" [fd8ffb3c-cf8b-4647-896f-36267f784b6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd8ffb3c-cf8b-4647-896f-36267f784b6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004419774s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-971041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-971041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-971041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d6fafd1-6115-4ec6-9fb3-9efb9f39e5c1] Pending
helpers_test.go:344: "busybox" [2d6fafd1-6115-4ec6-9fb3-9efb9f39e5c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d6fafd1-6115-4ec6-9fb3-9efb9f39e5c1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003523951s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-889918 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9dc201e7-6e82-4a8b-8272-038963f23b13] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9dc201e7-6e82-4a8b-8272-038963f23b13] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003636739s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-889918 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-578533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-578533 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-889918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-889918 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (684.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-971041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-971041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m23.923117892s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-971041 -n no-preload-971041
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (684.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (613.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-578533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-578533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m13.395404438s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-578533 -n default-k8s-diff-port-578533
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (613.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (546.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-889918 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-889918 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m6.110258818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-889918 -n embed-certs-889918
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (546.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-570369 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-570369 --alsologtostderr -v=3: (4.27719634s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-570369 -n old-k8s-version-570369: exit status 7 (58.587761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-570369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
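
Note: as the output above shows, "minikube status" exits non-zero (exit status 7 here) when the profile is stopped, and the test logs it as "may be ok" before enabling the addon. Below is a minimal Go sketch of that tolerance, assuming only the behaviour visible above; it is not the test suite's actual helper code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log above; a stopped profile makes this exit 7.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-570369", "-n", "old-k8s-version-570369")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero status is tolerated here, matching "status error: exit status 7 (may be ok)".
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("host state: %s\n", out)
}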

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (44.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229943 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229943 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (44.971699236s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (74.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m14.403808027s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (83.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m23.3213702s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.145370504s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-229943 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-229943 --alsologtostderr -v=3: (7.82657436s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229943 -n newest-cni-229943
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229943 -n newest-cni-229943: exit status 7 (76.043296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-229943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-229943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.358357915s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (52.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229943 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229943 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (52.344485313s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229943 -n newest-cni-229943
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (52.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-179125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5gkc8" [433b556f-d844-4041-be5c-9ea39a1b35a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5gkc8" [433b556f-d844-4041-be5c-9ea39a1b35a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004796087s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
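
Note: the Localhost and HairPin checks above run the same nc probe against port 8080 from inside the netcat pod; Localhost targets 127.0.0.1, while HairPin targets the pod's own service name ("netcat"), so the connection must loop back through the cluster network. A minimal Go sketch of the two probes follows; the probe helper and the hard-coded profile name are illustrative, not the test suite's code.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one nc check inside the netcat pod, mirroring the commands in the log above.
func probe(kubeContext, target string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	return cmd.Run()
}

func main() {
	kubeContext := "auto-179125" // profile name taken from the log above
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe(kubeContext, target); err != nil {
			fmt.Printf("%s: FAIL (%v)\n", target, err)
		} else {
			fmt.Printf("%s: ok\n", target)
		}
	}
}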

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (93.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0719 19:56:35.232845   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.238374   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.249491   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.269784   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.310094   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.391715   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.552748   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:35.874779   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:36.515988   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:56:37.796553   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.496125342s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-229943 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-229943 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229943 -n newest-cni-229943
E0719 19:56:40.357444   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229943 -n newest-cni-229943: exit status 2 (262.778785ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229943 -n newest-cni-229943
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229943 -n newest-cni-229943: exit status 2 (257.950155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-229943 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229943 -n newest-cni-229943
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229943 -n newest-cni-229943
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (97.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m37.99401911s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (97.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (103.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0719 19:56:45.478176   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m43.566337389s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l6b4t" [e1b7137d-3ef7-4084-8a2a-1a53b46b604f] Running
E0719 19:56:55.718326   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004590728s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-179125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k5cxx" [3bc37544-14ab-4656-bbaa-c4a5b4396a72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k5cxx" [3bc37544-14ab-4656-bbaa-c4a5b4396a72] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005082278s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (105.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0719 19:57:31.010305   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.crt: no such file or directory
E0719 19:57:32.426916   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/addons-990895/client.crt: no such file or directory
E0719 19:57:41.250815   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.crt: no such file or directory
E0719 19:57:46.571024   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
E0719 19:57:57.160090   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
E0719 19:58:01.731965   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/default-k8s-diff-port-578533/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m45.543964394s)
--- PASS: TestNetworkPlugins/group/flannel/Start (105.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p2xpk" [fe0ef7ae-8062-4f54-b441-b9d2e29c2ad9] Running
E0719 19:58:09.645269   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.650526   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.660786   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.681046   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.721424   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.801743   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:09.961999   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:10.282611   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
E0719 19:58:10.923448   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005553246s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-179125 replace --force -f testdata/netcat-deployment.yaml
E0719 19:58:12.203778   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5q2x4" [d78d2f46-fa93-4a30-a8aa-1a27fa3c6f0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 19:58:14.764542   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-5q2x4" [d78d2f46-fa93-4a30-a8aa-1a27fa3c6f0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004194066s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-179125 "pgrep -a kubelet"
E0719 19:58:19.884896   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-179125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-89wg6" [13690539-61da-40db-8dbd-56a272b4edd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-89wg6" [13690539-61da-40db-8dbd-56a272b4edd5] Running
E0719 19:58:30.125515   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/old-k8s-version-570369/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.007137649s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-179125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4s728" [1b4a48a3-7113-48c1-b0fa-c5b2fa0e58c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4s728" [1b4a48a3-7113-48c1-b0fa-c5b2fa0e58c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004077705s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-179125 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.673779336s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pg5xp" [315f7d09-2d90-40f1-a39f-5b2d97605cb4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003338403s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-179125 replace --force -f testdata/netcat-deployment.yaml
E0719 19:59:19.080659   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/no-preload-971041/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-djq8n" [bf049c59-7109-4449-99c9-5d190a181efd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-djq8n" [bf049c59-7109-4449-99c9-5d190a181efd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003907639s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-179125 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-179125 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5f558" [0954b7af-c078-4753-a21a-5dab731e3f4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 19:59:43.525030   22028 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/functional-788436/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-5f558" [0954b7af-c078-4753-a21a-5dab731e3f4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003780777s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-179125 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-179125 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
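The HairPin subtest checks hairpin traffic: the netcat pod connects to the hostname netcat, i.e. the Service fronting its own deployment, so the pod must be able to reach itself through that Service. Before re-running the probe by hand, the Service can be confirmed with a quick lookup (the service name and port are inferred from the nc target above, not stated elsewhere in the report):

    kubectl --context bridge-179125 -n default get svc netcat -o wide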

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
276 TestStartStop/group/disable-driver-mounts 0.18
280 TestNetworkPlugins/group/kubenet 2.85
288 TestNetworkPlugins/group/cilium 3.57
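Most of the skips in this table are environment-driven (crio instead of docker/containerd, KVM instead of the docker driver, or a non-darwin/windows host) rather than failures. To see an individual skip reason locally, a single test can be re-run through the standard Go test runner; this is a sketch that assumes a checkout of the minikube repository with its integration tests under test/integration, and uses TestDockerFlags from the table purely as an example:

    go test -v ./test/integration -run 'TestDockerFlags' -timeout 30m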
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-994300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-994300
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-179125 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:23:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.140:8443
  name: running-upgrade-197194
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:23:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.51:8443
  name: stopped-upgrade-982139
contexts:
- context:
    cluster: running-upgrade-197194
    user: running-upgrade-197194
  name: running-upgrade-197194
- context:
    cluster: stopped-upgrade-982139
    user: stopped-upgrade-982139
  name: stopped-upgrade-982139
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-197194
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.key
- name: stopped-upgrade-982139
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.key
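Note that the kubenet-179125 context never appears in the kubeconfig dumped above, which is consistent with the "context was not found" and "does not exist" errors throughout this debug section: the profile was deliberately never started. The contexts that are present can be listed with kubectl's standard config subcommand (shown here only for orientation; it is not part of the test):

    kubectl config get-contexts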

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-179125

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-179125"

                                                
                                                
----------------------- debugLogs end: kubenet-179125 [took: 2.692441198s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-179125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-179125
--- SKIP: TestNetworkPlugins/group/kubenet (2.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-179125 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-179125" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:24:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.140:8443
  name: running-upgrade-197194
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19307-14841/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:23:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.51:8443
  name: stopped-upgrade-982139
contexts:
- context:
    cluster: running-upgrade-197194
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 19:24:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-197194
  name: running-upgrade-197194
- context:
    cluster: stopped-upgrade-982139
    user: stopped-upgrade-982139
  name: stopped-upgrade-982139
current-context: running-upgrade-197194
kind: Config
preferences: {}
users:
- name: running-upgrade-197194
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/running-upgrade-197194/client.key
- name: stopped-upgrade-982139
  user:
    client-certificate: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.crt
    client-key: /home/jenkins/minikube-integration/19307-14841/.minikube/profiles/stopped-upgrade-982139/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-179125

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-179125" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-179125"

                                                
                                                
----------------------- debugLogs end: cilium-179125 [took: 3.424085384s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-179125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-179125
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)
